Jun 25 14:36:07.857902 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 14:36:07.857923 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024 Jun 25 14:36:07.857931 kernel: efi: EFI v2.70 by EDK II Jun 25 14:36:07.857937 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9210018 MEMRESERVE=0xd9523d18 Jun 25 14:36:07.857942 kernel: random: crng init done Jun 25 14:36:07.857948 kernel: ACPI: Early table checksum verification disabled Jun 25 14:36:07.857954 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jun 25 14:36:07.857961 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jun 25 14:36:07.857967 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.857972 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.857978 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.857983 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.857989 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.857995 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.858004 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.858010 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.858016 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 14:36:07.858022 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jun 25 14:36:07.858028 kernel: NUMA: Failed to initialise from firmware Jun 25 14:36:07.858035 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 14:36:07.858041 kernel: NUMA: NODE_DATA [mem 0xdcb07800-0xdcb0cfff] Jun 25 14:36:07.858047 kernel: Zone ranges: Jun 25 14:36:07.858053 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 14:36:07.858063 kernel: DMA32 empty Jun 25 14:36:07.858069 kernel: Normal empty Jun 25 14:36:07.858075 kernel: Movable zone start for each node Jun 25 14:36:07.858081 kernel: Early memory node ranges Jun 25 14:36:07.858087 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jun 25 14:36:07.858093 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jun 25 14:36:07.858100 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jun 25 14:36:07.858106 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jun 25 14:36:07.858112 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jun 25 14:36:07.858118 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jun 25 14:36:07.858124 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jun 25 14:36:07.858130 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jun 25 14:36:07.858138 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jun 25 14:36:07.858144 kernel: psci: probing for conduit method from ACPI. Jun 25 14:36:07.858150 kernel: psci: PSCIv1.1 detected in firmware. 
Jun 25 14:36:07.858156 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 14:36:07.858162 kernel: psci: Trusted OS migration not required Jun 25 14:36:07.858170 kernel: psci: SMC Calling Convention v1.1 Jun 25 14:36:07.858177 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jun 25 14:36:07.858185 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jun 25 14:36:07.858191 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jun 25 14:36:07.858198 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jun 25 14:36:07.858204 kernel: Detected PIPT I-cache on CPU0 Jun 25 14:36:07.858210 kernel: CPU features: detected: GIC system register CPU interface Jun 25 14:36:07.858216 kernel: CPU features: detected: Hardware dirty bit management Jun 25 14:36:07.858222 kernel: CPU features: detected: Spectre-v4 Jun 25 14:36:07.858228 kernel: CPU features: detected: Spectre-BHB Jun 25 14:36:07.858235 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 14:36:07.858242 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 14:36:07.858249 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 14:36:07.858255 kernel: alternatives: applying boot alternatives Jun 25 14:36:07.858261 kernel: Fallback order for Node 0: 0 Jun 25 14:36:07.858267 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jun 25 14:36:07.858274 kernel: Policy zone: DMA Jun 25 14:36:07.858281 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:36:07.858288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 14:36:07.858294 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 14:36:07.858301 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 14:36:07.858307 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 14:36:07.858316 kernel: Memory: 2458544K/2572288K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 113744K reserved, 0K cma-reserved) Jun 25 14:36:07.858323 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 14:36:07.858329 kernel: trace event string verifier disabled Jun 25 14:36:07.858336 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 14:36:07.858417 kernel: rcu: RCU event tracing is enabled. Jun 25 14:36:07.858425 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 14:36:07.858431 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 14:36:07.858438 kernel: Tracing variant of Tasks RCU enabled. Jun 25 14:36:07.858445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 14:36:07.858451 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 14:36:07.858458 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 14:36:07.858464 kernel: GICv3: 256 SPIs implemented Jun 25 14:36:07.858473 kernel: GICv3: 0 Extended SPIs implemented Jun 25 14:36:07.858479 kernel: Root IRQ handler: gic_handle_irq Jun 25 14:36:07.858485 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 14:36:07.858492 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jun 25 14:36:07.858498 kernel: ITS [mem 0x08080000-0x0809ffff] Jun 25 14:36:07.858505 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jun 25 14:36:07.858512 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jun 25 14:36:07.858518 kernel: GICv3: using LPI property table @0x00000000400e0000 Jun 25 14:36:07.858525 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400f0000 Jun 25 14:36:07.858531 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 14:36:07.858538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:36:07.858545 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 14:36:07.858552 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 14:36:07.858559 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 14:36:07.858565 kernel: arm-pv: using stolen time PV Jun 25 14:36:07.858572 kernel: Console: colour dummy device 80x25 Jun 25 14:36:07.858579 kernel: ACPI: Core revision 20220331 Jun 25 14:36:07.858586 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 14:36:07.858593 kernel: pid_max: default: 32768 minimum: 301 Jun 25 14:36:07.858599 kernel: LSM: Security Framework initializing Jun 25 14:36:07.858606 kernel: SELinux: Initializing. Jun 25 14:36:07.858614 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:36:07.858620 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:36:07.858627 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:36:07.858634 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 14:36:07.858640 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:36:07.858647 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 14:36:07.858653 kernel: rcu: Hierarchical SRCU implementation. Jun 25 14:36:07.858660 kernel: rcu: Max phase no-delay instances is 400. Jun 25 14:36:07.858705 kernel: Platform MSI: ITS@0x8080000 domain created Jun 25 14:36:07.858716 kernel: PCI/MSI: ITS@0x8080000 domain created Jun 25 14:36:07.858723 kernel: Remapping and enabling EFI services. Jun 25 14:36:07.858729 kernel: smp: Bringing up secondary CPUs ... 
Jun 25 14:36:07.858736 kernel: Detected PIPT I-cache on CPU1 Jun 25 14:36:07.858742 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jun 25 14:36:07.858749 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040100000 Jun 25 14:36:07.858756 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:36:07.858762 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 14:36:07.858769 kernel: Detected PIPT I-cache on CPU2 Jun 25 14:36:07.858776 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jun 25 14:36:07.858784 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040110000 Jun 25 14:36:07.858790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:36:07.858797 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jun 25 14:36:07.858804 kernel: Detected PIPT I-cache on CPU3 Jun 25 14:36:07.858815 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jun 25 14:36:07.858829 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040120000 Jun 25 14:36:07.858838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:36:07.858844 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jun 25 14:36:07.858851 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 14:36:07.858858 kernel: SMP: Total of 4 processors activated. Jun 25 14:36:07.858865 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 14:36:07.858875 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 14:36:07.858882 kernel: CPU features: detected: Common not Private translations Jun 25 14:36:07.858889 kernel: CPU features: detected: CRC32 instructions Jun 25 14:36:07.858896 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 14:36:07.858903 kernel: CPU features: detected: LSE atomic instructions Jun 25 14:36:07.858910 kernel: CPU features: detected: Privileged Access Never Jun 25 14:36:07.858919 kernel: CPU features: detected: RAS Extension Support Jun 25 14:36:07.858926 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 25 14:36:07.858936 kernel: CPU: All CPU(s) started at EL1 Jun 25 14:36:07.858943 kernel: alternatives: applying system-wide alternatives Jun 25 14:36:07.858950 kernel: devtmpfs: initialized Jun 25 14:36:07.858958 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 14:36:07.858965 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 14:36:07.858972 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 14:36:07.858979 kernel: SMBIOS 3.0.0 present. 
Jun 25 14:36:07.858987 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jun 25 14:36:07.858994 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 14:36:07.859002 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 14:36:07.859009 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 14:36:07.859016 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 14:36:07.859023 kernel: audit: initializing netlink subsys (disabled) Jun 25 14:36:07.859030 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Jun 25 14:36:07.859037 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 14:36:07.859044 kernel: cpuidle: using governor menu Jun 25 14:36:07.859052 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 25 14:36:07.859061 kernel: ASID allocator initialised with 32768 entries Jun 25 14:36:07.859068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 14:36:07.859075 kernel: Serial: AMBA PL011 UART driver Jun 25 14:36:07.859082 kernel: KASLR enabled Jun 25 14:36:07.859089 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 14:36:07.859096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 14:36:07.859103 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 14:36:07.859110 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 14:36:07.859119 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 14:36:07.859126 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 14:36:07.859132 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 14:36:07.859139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 14:36:07.859146 kernel: ACPI: Added _OSI(Module Device) Jun 25 14:36:07.859153 kernel: ACPI: Added _OSI(Processor Device) Jun 25 14:36:07.859160 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 14:36:07.859167 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 14:36:07.859174 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 14:36:07.859182 kernel: ACPI: Interpreter enabled Jun 25 14:36:07.859189 kernel: ACPI: Using GIC for interrupt routing Jun 25 14:36:07.859196 kernel: ACPI: MCFG table detected, 1 entries Jun 25 14:36:07.859203 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jun 25 14:36:07.859210 kernel: printk: console [ttyAMA0] enabled Jun 25 14:36:07.859217 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 14:36:07.859367 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 14:36:07.859439 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 25 14:36:07.859513 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 25 14:36:07.859576 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jun 25 14:36:07.859637 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jun 25 14:36:07.859647 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jun 25 14:36:07.859654 kernel: PCI host bridge to bus 0000:00 Jun 25 14:36:07.859762 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jun 25 14:36:07.859834 kernel: pci_bus 0000:00: root bus resource 
[io 0x0000-0xffff window] Jun 25 14:36:07.859899 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jun 25 14:36:07.859965 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 14:36:07.860054 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jun 25 14:36:07.860142 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 14:36:07.860208 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jun 25 14:36:07.860274 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jun 25 14:36:07.860360 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jun 25 14:36:07.860440 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jun 25 14:36:07.860505 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jun 25 14:36:07.860598 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jun 25 14:36:07.860659 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jun 25 14:36:07.860720 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 25 14:36:07.860788 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jun 25 14:36:07.860798 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 25 14:36:07.860807 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 25 14:36:07.860815 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 25 14:36:07.860822 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 25 14:36:07.860837 kernel: iommu: Default domain type: Translated Jun 25 14:36:07.860844 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 14:36:07.860851 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 14:36:07.860859 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 14:36:07.860866 kernel: PTP clock support registered Jun 25 14:36:07.860873 kernel: Registered efivars operations Jun 25 14:36:07.860883 kernel: vgaarb: loaded Jun 25 14:36:07.860889 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 14:36:07.860896 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 14:36:07.860904 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 14:36:07.860911 kernel: pnp: PnP ACPI init Jun 25 14:36:07.860989 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jun 25 14:36:07.861003 kernel: pnp: PnP ACPI: found 1 devices Jun 25 14:36:07.861012 kernel: NET: Registered PF_INET protocol family Jun 25 14:36:07.861021 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 14:36:07.861030 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 14:36:07.861040 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 14:36:07.861047 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 14:36:07.861054 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 14:36:07.861062 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 14:36:07.861071 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:36:07.861079 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:36:07.861086 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 14:36:07.861095 kernel: PCI: CLS 0 bytes, default 64 Jun 25 14:36:07.861102 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jun 25 14:36:07.861109 kernel: kvm [1]: HYP mode not available Jun 25 14:36:07.861116 kernel: Initialise system trusted keyrings Jun 25 14:36:07.861123 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 14:36:07.861130 kernel: Key type asymmetric registered Jun 25 14:36:07.861137 kernel: Asymmetric key parser 'x509' registered Jun 25 14:36:07.861144 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 14:36:07.861151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 14:36:07.861159 kernel: io scheduler mq-deadline registered Jun 25 14:36:07.861166 kernel: io scheduler kyber registered Jun 25 14:36:07.861173 kernel: io scheduler bfq registered Jun 25 14:36:07.861180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 25 14:36:07.861187 kernel: ACPI: button: Power Button [PWRB] Jun 25 14:36:07.861195 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 25 14:36:07.861264 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jun 25 14:36:07.861274 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 14:36:07.861282 kernel: thunder_xcv, ver 1.0 Jun 25 14:36:07.861291 kernel: thunder_bgx, ver 1.0 Jun 25 14:36:07.861298 kernel: nicpf, ver 1.0 Jun 25 14:36:07.861305 kernel: nicvf, ver 1.0 Jun 25 14:36:07.861391 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 14:36:07.861452 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:36:07 UTC (1719326167) Jun 25 14:36:07.861461 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 14:36:07.861468 kernel: NET: Registered PF_INET6 protocol family Jun 25 14:36:07.861475 kernel: Segment Routing with IPv6 Jun 25 14:36:07.861484 kernel: In-situ OAM 
(IOAM) with IPv6 Jun 25 14:36:07.861491 kernel: NET: Registered PF_PACKET protocol family Jun 25 14:36:07.861498 kernel: Key type dns_resolver registered Jun 25 14:36:07.861505 kernel: registered taskstats version 1 Jun 25 14:36:07.861512 kernel: Loading compiled-in X.509 certificates Jun 25 14:36:07.861519 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83' Jun 25 14:36:07.861527 kernel: Key type .fscrypt registered Jun 25 14:36:07.861534 kernel: Key type fscrypt-provisioning registered Jun 25 14:36:07.861541 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 14:36:07.861550 kernel: ima: Allocated hash algorithm: sha1 Jun 25 14:36:07.861557 kernel: ima: No architecture policies found Jun 25 14:36:07.861564 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 14:36:07.861572 kernel: clk: Disabling unused clocks Jun 25 14:36:07.861578 kernel: Freeing unused kernel memory: 34688K Jun 25 14:36:07.861585 kernel: Run /init as init process Jun 25 14:36:07.861592 kernel: with arguments: Jun 25 14:36:07.861599 kernel: /init Jun 25 14:36:07.861606 kernel: with environment: Jun 25 14:36:07.861614 kernel: HOME=/ Jun 25 14:36:07.861621 kernel: TERM=linux Jun 25 14:36:07.861628 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 14:36:07.861636 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:36:07.861646 systemd[1]: Detected virtualization kvm. Jun 25 14:36:07.861654 systemd[1]: Detected architecture arm64. Jun 25 14:36:07.861662 systemd[1]: Running in initrd. Jun 25 14:36:07.861669 systemd[1]: No hostname configured, using default hostname. Jun 25 14:36:07.861678 systemd[1]: Hostname set to . Jun 25 14:36:07.861686 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:36:07.861694 systemd[1]: Queued start job for default target initrd.target. Jun 25 14:36:07.861735 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:36:07.861743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:36:07.861751 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:36:07.861759 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:36:07.861766 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:36:07.861776 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:36:07.861785 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:36:07.861794 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:36:07.861802 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:36:07.861810 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:36:07.861818 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 14:36:07.861830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:36:07.861840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:36:07.861848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jun 25 14:36:07.861856 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:36:07.861863 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:36:07.861871 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 14:36:07.861878 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 14:36:07.861886 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:36:07.861894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:36:07.861902 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 14:36:07.861911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:36:07.861919 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 14:36:07.861927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:36:07.861935 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:36:07.861943 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 14:36:07.861951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:36:07.861959 kernel: audit: type=1130 audit(1719326167.854:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.861967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 14:36:07.861980 systemd-journald[224]: Journal started Jun 25 14:36:07.862029 systemd-journald[224]: Runtime Journal (/run/log/journal/db98be183b284e049923b27ba434040f) is 6.0M, max 48.6M, 42.6M free. Jun 25 14:36:07.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.844883 systemd-modules-load[226]: Inserted module 'overlay' Jun 25 14:36:07.863719 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:36:07.864472 systemd-modules-load[226]: Inserted module 'br_netfilter' Jun 25 14:36:07.868034 kernel: Bridge firewalling registered Jun 25 14:36:07.868055 kernel: audit: type=1130 audit(1719326167.864:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.871609 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:36:07.877985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:36:07.882384 kernel: audit: type=1130 audit(1719326167.878:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:07.882407 kernel: SCSI subsystem initialized Jun 25 14:36:07.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.879093 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:36:07.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.884131 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 14:36:07.888300 kernel: audit: type=1130 audit(1719326167.883:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.888328 kernel: audit: type=1334 audit(1719326167.887:6): prog-id=6 op=LOAD Jun 25 14:36:07.887000 audit: BPF prog-id=6 op=LOAD Jun 25 14:36:07.888193 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:36:07.893129 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 14:36:07.893149 kernel: device-mapper: uevent: version 1.0.3 Jun 25 14:36:07.893166 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 14:36:07.894073 dracut-cmdline[245]: dracut-dracut-053 Jun 25 14:36:07.895390 systemd-modules-load[226]: Inserted module 'dm_multipath' Jun 25 14:36:07.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.896491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:36:07.900943 kernel: audit: type=1130 audit(1719326167.897:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.901005 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:36:07.905533 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:36:07.912864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:36:07.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.917357 kernel: audit: type=1130 audit(1719326167.912:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.923140 systemd-resolved[249]: Positive Trust Anchors: Jun 25 14:36:07.923157 systemd-resolved[249]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:36:07.923187 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:36:07.928463 systemd-resolved[249]: Defaulting to hostname 'linux'. Jun 25 14:36:07.929472 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:36:07.930763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:36:07.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.935367 kernel: audit: type=1130 audit(1719326167.930:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:07.978366 kernel: Loading iSCSI transport class v2.0-870. Jun 25 14:36:07.986367 kernel: iscsi: registered transport (tcp) Jun 25 14:36:07.999359 kernel: iscsi: registered transport (qla4xxx) Jun 25 14:36:07.999381 kernel: QLogic iSCSI HBA Driver Jun 25 14:36:08.046869 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 14:36:08.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:08.050374 kernel: audit: type=1130 audit(1719326168.047:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:08.057548 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 14:36:08.117383 kernel: raid6: neonx8 gen() 14827 MB/s Jun 25 14:36:08.134422 kernel: raid6: neonx4 gen() 11556 MB/s Jun 25 14:36:08.151363 kernel: raid6: neonx2 gen() 10538 MB/s Jun 25 14:36:08.168356 kernel: raid6: neonx1 gen() 10469 MB/s Jun 25 14:36:08.185357 kernel: raid6: int64x8 gen() 6881 MB/s Jun 25 14:36:08.202357 kernel: raid6: int64x4 gen() 7280 MB/s Jun 25 14:36:08.219358 kernel: raid6: int64x2 gen() 6080 MB/s Jun 25 14:36:08.236357 kernel: raid6: int64x1 gen() 5046 MB/s Jun 25 14:36:08.236370 kernel: raid6: using algorithm neonx8 gen() 14827 MB/s Jun 25 14:36:08.253361 kernel: raid6: .... xor() 11889 MB/s, rmw enabled Jun 25 14:36:08.253375 kernel: raid6: using neon recovery algorithm Jun 25 14:36:08.258361 kernel: xor: measuring software checksum speed Jun 25 14:36:08.259359 kernel: 8regs : 19878 MB/sec Jun 25 14:36:08.260629 kernel: 32regs : 19640 MB/sec Jun 25 14:36:08.260641 kernel: arm64_neon : 27072 MB/sec Jun 25 14:36:08.260650 kernel: xor: using function: arm64_neon (27072 MB/sec) Jun 25 14:36:08.325372 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jun 25 14:36:08.342687 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jun 25 14:36:08.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:08.344000 audit: BPF prog-id=7 op=LOAD Jun 25 14:36:08.344000 audit: BPF prog-id=8 op=LOAD Jun 25 14:36:08.357584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:36:08.385953 systemd-udevd[427]: Using default interface naming scheme 'v252'. Jun 25 14:36:08.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:08.389437 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:36:08.391374 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 14:36:08.404321 dracut-pre-trigger[433]: rd.md=0: removing MD RAID activation Jun 25 14:36:08.444388 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:36:08.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:08.454009 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:36:08.490964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:36:08.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:08.519750 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jun 25 14:36:08.525852 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 14:36:08.525960 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 14:36:08.525970 kernel: GPT:9289727 != 19775487 Jun 25 14:36:08.525979 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 14:36:08.525988 kernel: GPT:9289727 != 19775487 Jun 25 14:36:08.525996 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 14:36:08.526004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:36:08.536997 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (485) Jun 25 14:36:08.537058 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (490) Jun 25 14:36:08.540799 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 14:36:08.544123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:36:08.548680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 14:36:08.549518 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 14:36:08.553464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 14:36:08.564669 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:36:08.571028 disk-uuid[497]: Primary Header is updated. 
Jun 25 14:36:08.571028 disk-uuid[497]: Secondary Entries is updated. Jun 25 14:36:08.571028 disk-uuid[497]: Secondary Header is updated. Jun 25 14:36:08.573786 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:36:09.590373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 14:36:09.590423 disk-uuid[498]: The operation has completed successfully. Jun 25 14:36:09.614053 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:36:09.615117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:36:09.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.629743 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:36:09.632400 sh[511]: Success Jun 25 14:36:09.650375 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:36:09.678709 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:36:09.700524 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:36:09.702392 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 14:36:09.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.709901 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:36:09.709935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:36:09.709946 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:36:09.710705 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:36:09.711771 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:36:09.715197 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:36:09.716018 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:36:09.729650 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 14:36:09.731946 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 14:36:09.739070 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:36:09.739107 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:36:09.739117 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:36:09.748616 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:36:09.750562 kernel: BTRFS info (device vda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:36:09.755310 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 14:36:09.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:09.758663 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 14:36:09.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.844000 audit: BPF prog-id=9 op=LOAD Jun 25 14:36:09.843113 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:36:09.856678 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:36:09.902650 systemd-networkd[700]: lo: Link UP Jun 25 14:36:09.902664 systemd-networkd[700]: lo: Gained carrier Jun 25 14:36:09.903031 systemd-networkd[700]: Enumeration completed Jun 25 14:36:09.903128 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:36:09.904440 systemd[1]: Reached target network.target - Network. Jun 25 14:36:09.906314 systemd-networkd[700]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:36:09.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.906317 systemd-networkd[700]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:36:09.907682 systemd-networkd[700]: eth0: Link UP Jun 25 14:36:09.907685 systemd-networkd[700]: eth0: Gained carrier Jun 25 14:36:09.907690 systemd-networkd[700]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:36:09.918570 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:36:09.926036 ignition[605]: Ignition 2.15.0 Jun 25 14:36:09.926047 ignition[605]: Stage: fetch-offline Jun 25 14:36:09.926090 ignition[605]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:36:09.926099 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:36:09.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.927769 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:36:09.926188 ignition[605]: parsed url from cmdline: "" Jun 25 14:36:09.929225 systemd-networkd[700]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 14:36:09.926192 ignition[605]: no config URL provided Jun 25 14:36:09.930066 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 14:36:09.926196 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:36:09.935155 iscsid[708]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:36:09.935155 iscsid[708]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 14:36:09.935155 iscsid[708]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Jun 25 14:36:09.935155 iscsid[708]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 14:36:09.935155 iscsid[708]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:36:09.935155 iscsid[708]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:36:09.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.926204 ignition[605]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:36:09.938054 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:36:09.926229 ignition[605]: op(1): [started] loading QEMU firmware config module Jun 25 14:36:09.941960 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:36:09.926234 ignition[605]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 14:36:09.951273 ignition[605]: op(1): [finished] loading QEMU firmware config module Jun 25 14:36:09.954336 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:36:09.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.955683 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:36:09.957602 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:36:09.959747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:36:09.971541 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:36:09.979790 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:36:09.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:09.999997 ignition[605]: parsing config with SHA512: ff9fbb021f937dcc4ebbeb1988de22a2f10a7af4dfe0a3d9b126e34124c02120c224654282c30087a43de624bec2486a9ac836b3868c6e7b44f41cb396aaa40d Jun 25 14:36:10.005328 unknown[605]: fetched base config from "system" Jun 25 14:36:10.005358 unknown[605]: fetched user config from "qemu" Jun 25 14:36:10.006242 ignition[605]: fetch-offline: fetch-offline passed Jun 25 14:36:10.006305 ignition[605]: Ignition finished successfully Jun 25 14:36:10.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:10.007259 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:36:10.008322 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 14:36:10.015562 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jun 25 14:36:10.027686 ignition[722]: Ignition 2.15.0 Jun 25 14:36:10.027697 ignition[722]: Stage: kargs Jun 25 14:36:10.027809 ignition[722]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:36:10.027828 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:36:10.028783 ignition[722]: kargs: kargs passed Jun 25 14:36:10.030615 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:36:10.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:10.028838 ignition[722]: Ignition finished successfully Jun 25 14:36:10.043554 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 14:36:10.053503 ignition[730]: Ignition 2.15.0 Jun 25 14:36:10.053512 ignition[730]: Stage: disks Jun 25 14:36:10.053612 ignition[730]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:36:10.053621 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:36:10.056131 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:36:10.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:10.054537 ignition[730]: disks: disks passed Jun 25 14:36:10.057628 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:36:10.054583 ignition[730]: Ignition finished successfully Jun 25 14:36:10.058867 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:36:10.059999 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:36:10.061346 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:36:10.062474 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:36:10.074532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:36:10.085444 systemd-fsck[740]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 14:36:10.089094 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 14:36:10.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:10.091264 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:36:10.136361 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:36:10.136443 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 14:36:10.137155 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:36:10.147451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:36:10.148955 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:36:10.150100 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 14:36:10.150132 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jun 25 14:36:10.158421 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (746) Jun 25 14:36:10.158443 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:36:10.158454 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:36:10.158463 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:36:10.150156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:36:10.152677 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 14:36:10.155382 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:36:10.162014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:36:10.194203 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:36:10.197413 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:36:10.201252 initrd-setup-root[784]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:36:10.203805 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:36:10.267552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 14:36:10.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:10.275483 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:36:10.276870 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:36:10.281351 kernel: BTRFS info (device vda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:36:10.295071 ignition[858]: INFO : Ignition 2.15.0 Jun 25 14:36:10.295071 ignition[858]: INFO : Stage: mount Jun 25 14:36:10.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:10.296940 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:36:10.296940 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:36:10.296940 ignition[858]: INFO : mount: mount passed Jun 25 14:36:10.296940 ignition[858]: INFO : Ignition finished successfully Jun 25 14:36:10.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:10.295613 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 14:36:10.297261 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:36:10.303488 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:36:10.708967 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:36:10.717615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:36:10.724238 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (869) Jun 25 14:36:10.724265 kernel: BTRFS info (device vda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:36:10.724276 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:36:10.725050 kernel: BTRFS info (device vda6): using free space tree Jun 25 14:36:10.728385 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 14:36:10.740445 ignition[887]: INFO : Ignition 2.15.0 Jun 25 14:36:10.740445 ignition[887]: INFO : Stage: files Jun 25 14:36:10.741754 ignition[887]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:36:10.741754 ignition[887]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:36:10.741754 ignition[887]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:36:10.744598 ignition[887]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:36:10.744598 ignition[887]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:36:10.747474 ignition[887]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:36:10.748613 ignition[887]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:36:10.748613 ignition[887]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:36:10.747967 unknown[887]: wrote ssh authorized keys file for user: core Jun 25 14:36:10.751593 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:36:10.751593 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:36:10.785889 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 14:36:10.837421 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:36:10.838891 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:36:10.838891 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 14:36:10.838891 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:36:10.843306 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jun 25 14:36:11.135176 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 14:36:11.357498 ignition[887]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:36:11.357498 ignition[887]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 14:36:11.360558 ignition[887]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:36:11.362315 ignition[887]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:36:11.363696 ignition[887]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 14:36:11.364681 ignition[887]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 14:36:11.365661 ignition[887]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 14:36:11.367356 ignition[887]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 14:36:11.367356 ignition[887]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 14:36:11.367356 ignition[887]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 14:36:11.367356 ignition[887]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 14:36:11.393118 ignition[887]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 14:36:11.394377 ignition[887]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 14:36:11.394377 ignition[887]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:36:11.394377 ignition[887]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:36:11.394377 ignition[887]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:36:11.394377 ignition[887]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:36:11.394377 ignition[887]: INFO : files: files passed Jun 25 14:36:11.394377 ignition[887]: INFO : Ignition finished successfully Jun 25 14:36:11.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.395547 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 14:36:11.403628 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jun 25 14:36:11.405888 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:36:11.406850 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 14:36:11.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.406946 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 14:36:11.410398 initrd-setup-root-after-ignition[912]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 14:36:11.413795 initrd-setup-root-after-ignition[914]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:36:11.413795 initrd-setup-root-after-ignition[914]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:36:11.416363 initrd-setup-root-after-ignition[918]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:36:11.416968 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:36:11.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.418497 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:36:11.420887 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:36:11.433444 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:36:11.433537 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 14:36:11.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.435063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:36:11.436454 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:36:11.437860 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:36:11.438616 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:36:11.449963 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:36:11.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.451762 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 14:36:11.459908 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:36:11.460896 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 25 14:36:11.462461 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:36:11.463851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:36:11.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.463962 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:36:11.465288 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:36:11.466459 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:36:11.467781 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:36:11.469128 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:36:11.470437 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 14:36:11.471845 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:36:11.473299 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:36:11.474749 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:36:11.476102 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:36:11.477612 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:36:11.478952 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:36:11.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.480142 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 14:36:11.480256 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:36:11.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.481847 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:36:11.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.483027 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:36:11.483122 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:36:11.484415 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:36:11.484509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:36:11.486000 systemd[1]: Stopped target paths.target - Path Units. Jun 25 14:36:11.487158 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 14:36:11.490399 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:36:11.491713 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:36:11.493056 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:36:11.494717 systemd[1]: iscsid.socket: Deactivated successfully. 
Jun 25 14:36:11.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.494799 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:36:11.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.495945 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 14:36:11.496040 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:36:11.497162 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:36:11.497247 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:36:11.511721 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:36:11.512725 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:36:11.515388 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 14:36:11.516151 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:36:11.516270 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:36:11.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.517810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 14:36:11.517919 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:36:11.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.521054 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 14:36:11.521178 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:36:11.523032 systemd[1]: Stopped target network.target - Network. Jun 25 14:36:11.528449 ignition[932]: INFO : Ignition 2.15.0 Jun 25 14:36:11.528449 ignition[932]: INFO : Stage: umount Jun 25 14:36:11.528449 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:36:11.528449 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 14:36:11.528449 ignition[932]: INFO : umount: umount passed Jun 25 14:36:11.528449 ignition[932]: INFO : Ignition finished successfully Jun 25 14:36:11.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:11.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.526151 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:36:11.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.526188 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:36:11.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.527999 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:36:11.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.529378 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:36:11.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.531218 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:36:11.531747 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 14:36:11.545000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:36:11.531844 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:36:11.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.533616 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:36:11.533702 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:36:11.535696 systemd-networkd[700]: eth0: DHCPv6 lease lost Jun 25 14:36:11.536073 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:36:11.536167 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:36:11.537687 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 14:36:11.537735 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:36:11.538770 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:36:11.538807 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:36:11.555000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:36:11.540165 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:36:11.540204 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:36:11.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.541615 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:36:11.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:11.541656 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:36:11.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.544635 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:36:11.545061 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:36:11.545171 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:36:11.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.546684 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:36:11.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.546712 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:36:11.555735 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 14:36:11.556453 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:36:11.556515 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:36:11.557945 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:36:11.557981 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:36:11.560101 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:36:11.560140 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 14:36:11.561103 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:36:11.565205 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:36:11.565775 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 14:36:11.565893 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:36:11.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.566987 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:36:11.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.567125 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:36:11.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.575777 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:36:11.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:11.575842 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:36:11.576749 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:36:11.576777 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:36:11.578241 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:36:11.578282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:36:11.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.579592 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 14:36:11.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.579626 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 14:36:11.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:11.580960 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 14:36:11.580995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:36:11.582462 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 14:36:11.582501 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:36:11.585068 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:36:11.586304 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 14:36:11.586444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:36:11.588542 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:36:11.588584 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:36:11.589456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:36:11.589493 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:36:11.591192 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 14:36:11.591314 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:36:11.592308 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:36:11.592427 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:36:11.593613 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jun 25 14:36:11.603562 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:36:11.609779 systemd[1]: Switching root. Jun 25 14:36:11.627942 iscsid[708]: iscsid shutting down. Jun 25 14:36:11.628555 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jun 25 14:36:11.628604 systemd-journald[224]: Journal stopped Jun 25 14:36:12.249193 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 14:36:12.249240 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:36:12.249251 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:36:12.249261 kernel: SELinux: policy capability open_perms=1 Jun 25 14:36:12.249275 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:36:12.249284 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:36:12.249294 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:36:12.249306 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:36:12.249316 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:36:12.249325 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:36:12.249336 systemd[1]: Successfully loaded SELinux policy in 33.969ms. Jun 25 14:36:12.249376 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.930ms. Jun 25 14:36:12.249388 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:36:12.249401 systemd[1]: Detected virtualization kvm. Jun 25 14:36:12.249411 systemd[1]: Detected architecture arm64. Jun 25 14:36:12.249424 systemd[1]: Detected first boot. Jun 25 14:36:12.249434 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:36:12.249444 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:36:12.249455 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 14:36:12.249465 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 14:36:12.249477 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 14:36:12.249487 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 14:36:12.249497 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 14:36:12.249508 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:36:12.249518 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:36:12.249528 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 14:36:12.249538 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:36:12.249548 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:36:12.249560 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:36:12.249571 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:36:12.249581 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:36:12.249591 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jun 25 14:36:12.249602 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:36:12.249612 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:36:12.249623 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:36:12.249633 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 14:36:12.249645 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 14:36:12.249656 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 14:36:12.249667 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 14:36:12.249677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:36:12.249687 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:36:12.249698 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:36:12.249708 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:36:12.249718 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 14:36:12.249729 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:36:12.249740 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:36:12.249750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:36:12.249760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:36:12.249775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:36:12.249786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 14:36:12.249796 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:36:12.249806 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:36:12.249823 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:36:12.249837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:36:12.249848 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:36:12.249858 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 14:36:12.249868 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:36:12.249880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:36:12.249890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:36:12.249900 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 14:36:12.249911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:36:12.249921 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:36:12.249932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:36:12.249942 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:36:12.249953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:36:12.249964 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jun 25 14:36:12.249975 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 14:36:12.249985 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 14:36:12.249995 kernel: kauditd_printk_skb: 94 callbacks suppressed Jun 25 14:36:12.250006 kernel: audit: type=1131 audit(1719326172.203:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.250017 kernel: loop: module loaded Jun 25 14:36:12.250028 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 14:36:12.250038 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 14:36:12.250049 kernel: audit: type=1131 audit(1719326172.206:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.250059 kernel: fuse: init (API version 7.37) Jun 25 14:36:12.250071 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 14:36:12.250083 kernel: audit: type=1130 audit(1719326172.211:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.250093 kernel: audit: type=1131 audit(1719326172.213:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.250104 kernel: audit: type=1334 audit(1719326172.215:109): prog-id=21 op=LOAD Jun 25 14:36:12.250113 kernel: audit: type=1334 audit(1719326172.216:110): prog-id=22 op=LOAD Jun 25 14:36:12.250124 kernel: audit: type=1334 audit(1719326172.216:111): prog-id=23 op=LOAD Jun 25 14:36:12.250133 kernel: audit: type=1334 audit(1719326172.216:112): prog-id=19 op=UNLOAD Jun 25 14:36:12.250143 kernel: ACPI: bus type drm_connector registered Jun 25 14:36:12.250152 kernel: audit: type=1334 audit(1719326172.216:113): prog-id=20 op=UNLOAD Jun 25 14:36:12.250162 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:36:12.250172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:36:12.250183 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:36:12.250193 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 14:36:12.250203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:36:12.250215 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 14:36:12.250262 systemd[1]: Stopped verity-setup.service. Jun 25 14:36:12.250276 kernel: audit: type=1131 audit(1719326172.243:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.250287 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:36:12.250298 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:36:12.250308 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 14:36:12.250318 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jun 25 14:36:12.250331 systemd-journald[1035]: Journal started Jun 25 14:36:12.250443 systemd-journald[1035]: Runtime Journal (/run/log/journal/db98be183b284e049923b27ba434040f) is 6.0M, max 48.6M, 42.6M free. Jun 25 14:36:11.685000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 14:36:11.778000 audit: BPF prog-id=10 op=LOAD Jun 25 14:36:11.778000 audit: BPF prog-id=10 op=UNLOAD Jun 25 14:36:11.778000 audit: BPF prog-id=11 op=LOAD Jun 25 14:36:11.778000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:36:12.091000 audit: BPF prog-id=12 op=LOAD Jun 25 14:36:12.091000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:36:12.092000 audit: BPF prog-id=13 op=LOAD Jun 25 14:36:12.092000 audit: BPF prog-id=14 op=LOAD Jun 25 14:36:12.092000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:36:12.092000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:36:12.092000 audit: BPF prog-id=15 op=LOAD Jun 25 14:36:12.092000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:36:12.092000 audit: BPF prog-id=16 op=LOAD Jun 25 14:36:12.092000 audit: BPF prog-id=17 op=LOAD Jun 25 14:36:12.092000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:36:12.092000 audit: BPF prog-id=14 op=UNLOAD Jun 25 14:36:12.093000 audit: BPF prog-id=18 op=LOAD Jun 25 14:36:12.093000 audit: BPF prog-id=15 op=UNLOAD Jun 25 14:36:12.093000 audit: BPF prog-id=19 op=LOAD Jun 25 14:36:12.093000 audit: BPF prog-id=20 op=LOAD Jun 25 14:36:12.093000 audit: BPF prog-id=16 op=UNLOAD Jun 25 14:36:12.093000 audit: BPF prog-id=17 op=UNLOAD Jun 25 14:36:12.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.106000 audit: BPF prog-id=18 op=UNLOAD Jun 25 14:36:12.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:12.215000 audit: BPF prog-id=21 op=LOAD Jun 25 14:36:12.216000 audit: BPF prog-id=22 op=LOAD Jun 25 14:36:12.216000 audit: BPF prog-id=23 op=LOAD Jun 25 14:36:12.216000 audit: BPF prog-id=19 op=UNLOAD Jun 25 14:36:12.216000 audit: BPF prog-id=20 op=UNLOAD Jun 25 14:36:12.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.247000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:36:12.247000 audit[1035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc86cc690 a2=4000 a3=1 items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:12.247000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:36:12.080001 systemd[1]: Queued start job for default target multi-user.target. Jun 25 14:36:12.080013 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 14:36:12.094734 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 14:36:12.252368 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:36:12.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.253482 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:36:12.254331 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:36:12.255335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:36:12.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.256415 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:36:12.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.257479 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:36:12.257621 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 14:36:12.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.258633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:36:12.258775 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 25 14:36:12.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.259777 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:36:12.259915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:36:12.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.260939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:36:12.261077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:36:12.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.262206 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:36:12.262368 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 14:36:12.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.263333 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:36:12.263482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:36:12.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.264482 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:36:12.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:12.265499 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:36:12.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.266587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 14:36:12.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.267791 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:36:12.274638 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:36:12.276570 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:36:12.277293 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:36:12.278886 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:36:12.280969 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 14:36:12.281893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:36:12.283275 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:36:12.287474 systemd-journald[1035]: Time spent on flushing to /var/log/journal/db98be183b284e049923b27ba434040f is 12.253ms for 980 entries. Jun 25 14:36:12.287474 systemd-journald[1035]: System Journal (/var/log/journal/db98be183b284e049923b27ba434040f) is 8.0M, max 195.6M, 187.6M free. Jun 25 14:36:12.308599 systemd-journald[1035]: Received client request to flush runtime journal. Jun 25 14:36:12.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.284148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:36:12.285336 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:36:12.287758 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:36:12.294606 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:36:12.295517 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:36:12.296488 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jun 25 14:36:12.297502 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:36:12.298475 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:36:12.304527 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:36:12.305643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:36:12.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.310275 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:36:12.314548 udevadm[1066]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 14:36:12.317866 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:36:12.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.331549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:36:12.347239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:36:12.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.693878 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:36:12.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.695000 audit: BPF prog-id=24 op=LOAD Jun 25 14:36:12.695000 audit: BPF prog-id=25 op=LOAD Jun 25 14:36:12.695000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:36:12.695000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:36:12.706673 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:36:12.723509 systemd-udevd[1071]: Using default interface naming scheme 'v252'. Jun 25 14:36:12.734923 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:36:12.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.736000 audit: BPF prog-id=26 op=LOAD Jun 25 14:36:12.750657 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:36:12.764000 audit: BPF prog-id=27 op=LOAD Jun 25 14:36:12.764000 audit: BPF prog-id=28 op=LOAD Jun 25 14:36:12.764000 audit: BPF prog-id=29 op=LOAD Jun 25 14:36:12.766367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1085) Jun 25 14:36:12.773367 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1083) Jun 25 14:36:12.774544 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jun 25 14:36:12.778267 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 14:36:12.787026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 14:36:12.809189 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:36:12.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.864593 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:36:12.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.869613 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:36:12.885900 systemd-networkd[1078]: lo: Link UP Jun 25 14:36:12.886150 systemd-networkd[1078]: lo: Gained carrier Jun 25 14:36:12.891350 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:36:12.891740 systemd-networkd[1078]: Enumeration completed Jun 25 14:36:12.891921 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:36:12.892045 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:36:12.892125 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:36:12.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.893418 systemd-networkd[1078]: eth0: Link UP Jun 25 14:36:12.893493 systemd-networkd[1078]: eth0: Gained carrier Jun 25 14:36:12.893602 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:36:12.905610 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 14:36:12.918473 systemd-networkd[1078]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 14:36:12.924236 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:36:12.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:12.925307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:36:12.936574 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:36:12.940069 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:36:12.968298 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:36:12.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:12.969432 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:36:12.970334 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:36:12.970395 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:36:12.971233 systemd[1]: Reached target machines.target - Containers. Jun 25 14:36:12.986725 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 14:36:12.987874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:36:12.988000 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:36:12.990319 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 14:36:12.993759 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:36:12.997005 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:36:12.999211 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:36:13.001135 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Jun 25 14:36:13.003162 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:36:13.006390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 14:36:13.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.008381 kernel: loop0: detected capacity change from 0 to 59648 Jun 25 14:36:13.025361 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:36:13.065530 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:36:13.066281 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:36:13.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.082399 kernel: loop1: detected capacity change from 0 to 113264 Jun 25 14:36:13.082960 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Jun 25 14:36:13.082960 systemd-fsck[1120]: /dev/vda1: 242 files, 114659/258078 clusters Jun 25 14:36:13.085713 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:36:13.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.092467 systemd[1]: Mounting boot.mount - Boot partition... 
Jun 25 14:36:13.100554 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:36:13.111362 kernel: loop2: detected capacity change from 0 to 194096 Jun 25 14:36:13.111324 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 14:36:13.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.147372 kernel: loop3: detected capacity change from 0 to 59648 Jun 25 14:36:13.156370 kernel: loop4: detected capacity change from 0 to 113264 Jun 25 14:36:13.161356 kernel: loop5: detected capacity change from 0 to 194096 Jun 25 14:36:13.166064 (sd-sysext)[1124]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 14:36:13.166482 (sd-sysext)[1124]: Merged extensions into '/usr'. Jun 25 14:36:13.168041 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:36:13.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.180525 systemd[1]: Starting ensure-sysext.service... Jun 25 14:36:13.182572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:36:13.189247 systemd[1]: Reloading. Jun 25 14:36:13.194842 systemd-tmpfiles[1126]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:36:13.195685 systemd-tmpfiles[1126]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:36:13.195946 systemd-tmpfiles[1126]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:36:13.196672 systemd-tmpfiles[1126]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 14:36:13.232758 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 14:36:13.311884 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:36:13.356000 audit: BPF prog-id=30 op=LOAD Jun 25 14:36:13.356000 audit: BPF prog-id=26 op=UNLOAD Jun 25 14:36:13.357000 audit: BPF prog-id=31 op=LOAD Jun 25 14:36:13.357000 audit: BPF prog-id=27 op=UNLOAD Jun 25 14:36:13.357000 audit: BPF prog-id=32 op=LOAD Jun 25 14:36:13.357000 audit: BPF prog-id=33 op=LOAD Jun 25 14:36:13.357000 audit: BPF prog-id=28 op=UNLOAD Jun 25 14:36:13.357000 audit: BPF prog-id=29 op=UNLOAD Jun 25 14:36:13.360000 audit: BPF prog-id=34 op=LOAD Jun 25 14:36:13.360000 audit: BPF prog-id=35 op=LOAD Jun 25 14:36:13.360000 audit: BPF prog-id=24 op=UNLOAD Jun 25 14:36:13.360000 audit: BPF prog-id=25 op=UNLOAD Jun 25 14:36:13.360000 audit: BPF prog-id=36 op=LOAD Jun 25 14:36:13.360000 audit: BPF prog-id=21 op=UNLOAD Jun 25 14:36:13.361000 audit: BPF prog-id=37 op=LOAD Jun 25 14:36:13.361000 audit: BPF prog-id=38 op=LOAD Jun 25 14:36:13.361000 audit: BPF prog-id=22 op=UNLOAD Jun 25 14:36:13.361000 audit: BPF prog-id=23 op=UNLOAD Jun 25 14:36:13.364056 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jun 25 14:36:13.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.366156 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:36:13.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.369833 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:36:13.372171 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 14:36:13.374479 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:36:13.376000 audit: BPF prog-id=39 op=LOAD Jun 25 14:36:13.378124 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:36:13.379000 audit: BPF prog-id=40 op=LOAD Jun 25 14:36:13.380771 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:36:13.383058 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:36:13.387000 audit[1191]: SYSTEM_BOOT pid=1191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.390916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:36:13.392639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:36:13.395089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:36:13.397605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:36:13.398495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:36:13.398676 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:36:13.399804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:36:13.399954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:36:13.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.401448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:36:13.401573 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:36:13.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:13.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.402946 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:36:13.403065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:36:13.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.404627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:36:13.404763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:36:13.405847 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:36:13.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.407230 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 14:36:13.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.409582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:36:13.417497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:36:13.420835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:36:13.423133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:36:13.423933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:36:13.424064 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:36:13.425673 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:36:13.427448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:36:13.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:13.427593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:36:13.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.428780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:36:13.428907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:36:13.430104 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:36:13.430206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:36:13.431363 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:36:13.431460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:36:13.433545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:36:13.441922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:36:13.447601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:36:13.450119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:36:13.455883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:36:13.457005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:36:13.457155 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:36:13.458152 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 14:36:13.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.459495 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:36:13.912610 systemd-timesyncd[1189]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 14:36:13.912653 systemd-timesyncd[1189]: Initial clock synchronization to Tue 2024-06-25 14:36:13.912523 UTC. 
Jun 25 14:36:13.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.913758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:36:13.913929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:36:13.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.915232 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:36:13.915345 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:36:13.916500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:36:13.916602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:36:13.917886 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:36:13.918035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:36:13.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.918261 systemd-resolved[1184]: Positive Trust Anchors: Jun 25 14:36:13.918267 systemd-resolved[1184]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:36:13.918293 systemd-resolved[1184]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:36:13.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 14:36:13.919532 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:36:13.920619 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:36:13.920709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:36:13.922244 systemd[1]: Finished ensure-sysext.service. Jun 25 14:36:13.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.924073 systemd-resolved[1184]: Defaulting to hostname 'linux'. Jun 25 14:36:13.935406 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:36:13.936349 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:36:13.936889 augenrules[1214]: No rules Jun 25 14:36:13.937806 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:36:13.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:13.935000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:36:13.935000 audit[1214]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcc4a45c0 a2=420 a3=0 items=0 ppid=1180 pid=1214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:13.935000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:36:13.939250 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:36:13.940115 systemd[1]: Reached target network.target - Network. Jun 25 14:36:13.940757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:36:13.941544 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:36:13.942508 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:36:13.943347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:36:13.944286 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 14:36:13.945226 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:36:13.946042 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:36:13.946827 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 14:36:13.946857 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:36:13.947666 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:36:13.948850 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jun 25 14:36:13.950839 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 14:36:13.958838 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:36:13.959762 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:36:13.960270 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:36:13.961129 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:36:13.961795 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:36:13.962505 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:36:13.962535 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:36:13.963750 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:36:13.966036 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:36:13.968082 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 14:36:13.970413 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:36:13.971319 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 14:36:13.972348 jq[1223]: false Jun 25 14:36:13.972734 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:36:13.974664 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:36:13.977504 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:36:13.979950 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:36:13.986070 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 14:36:13.988018 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:36:13.988090 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 14:36:13.988613 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:36:13.989550 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:36:13.991690 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 25 14:36:13.992101 extend-filesystems[1224]: Found loop3 Jun 25 14:36:13.992101 extend-filesystems[1224]: Found loop4 Jun 25 14:36:13.992101 extend-filesystems[1224]: Found loop5 Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda1 Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda2 Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda3 Jun 25 14:36:13.995115 extend-filesystems[1224]: Found usr Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda4 Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda6 Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda7 Jun 25 14:36:13.995115 extend-filesystems[1224]: Found vda9 Jun 25 14:36:13.995115 extend-filesystems[1224]: Checking size of /dev/vda9 Jun 25 14:36:13.994298 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:36:14.011698 jq[1240]: true Jun 25 14:36:13.994523 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:36:13.995858 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:36:13.996699 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:36:14.002298 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:36:14.002475 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:36:14.018695 jq[1244]: true Jun 25 14:36:14.025094 tar[1243]: linux-arm64/helm Jun 25 14:36:14.030288 dbus-daemon[1222]: [system] SELinux support is enabled Jun 25 14:36:14.030535 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 14:36:14.033089 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:36:14.033111 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 14:36:14.033971 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:36:14.034000 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:36:14.043275 systemd-logind[1235]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 14:36:14.045945 systemd-logind[1235]: New seat seat0. Jun 25 14:36:14.047421 extend-filesystems[1224]: Resized partition /dev/vda9 Jun 25 14:36:14.054259 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1082) Jun 25 14:36:14.055409 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:36:14.067836 extend-filesystems[1260]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 14:36:14.071997 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 14:36:14.114767 update_engine[1238]: I0625 14:36:14.114588 1238 main.cc:92] Flatcar Update Engine starting Jun 25 14:36:14.127380 update_engine[1238]: I0625 14:36:14.117286 1238 update_check_scheduler.cc:74] Next update check in 7m32s Jun 25 14:36:14.117242 systemd[1]: Started update-engine.service - Update Engine. Jun 25 14:36:14.122514 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jun 25 14:36:14.138339 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 14:36:14.145692 extend-filesystems[1260]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 14:36:14.145692 extend-filesystems[1260]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 14:36:14.145692 extend-filesystems[1260]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 14:36:14.149710 extend-filesystems[1224]: Resized filesystem in /dev/vda9 Jun 25 14:36:14.151141 bash[1267]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:36:14.146603 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:36:14.146774 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:36:14.149213 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:36:14.150948 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 14:36:14.167600 locksmithd[1268]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:36:14.314469 containerd[1245]: time="2024-06-25T14:36:14.314311614Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:36:14.341566 containerd[1245]: time="2024-06-25T14:36:14.341506374Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:36:14.341686 containerd[1245]: time="2024-06-25T14:36:14.341652614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343200 containerd[1245]: time="2024-06-25T14:36:14.343156534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343200 containerd[1245]: time="2024-06-25T14:36:14.343190814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343511 containerd[1245]: time="2024-06-25T14:36:14.343475734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343511 containerd[1245]: time="2024-06-25T14:36:14.343499174Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 14:36:14.343597 containerd[1245]: time="2024-06-25T14:36:14.343581774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343652 containerd[1245]: time="2024-06-25T14:36:14.343638134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343684 containerd[1245]: time="2024-06-25T14:36:14.343652654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343724 containerd[1245]: time="2024-06-25T14:36:14.343711654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343935 containerd[1245]: time="2024-06-25T14:36:14.343909094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:36:14.343966 containerd[1245]: time="2024-06-25T14:36:14.343933454Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:36:14.343966 containerd[1245]: time="2024-06-25T14:36:14.343943814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:36:14.344094 containerd[1245]: time="2024-06-25T14:36:14.344075094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:36:14.344126 containerd[1245]: time="2024-06-25T14:36:14.344093734Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:36:14.344162 containerd[1245]: time="2024-06-25T14:36:14.344146934Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:36:14.344190 containerd[1245]: time="2024-06-25T14:36:14.344162054Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:36:14.348452 containerd[1245]: time="2024-06-25T14:36:14.348417054Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:36:14.348503 containerd[1245]: time="2024-06-25T14:36:14.348454774Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:36:14.348503 containerd[1245]: time="2024-06-25T14:36:14.348470574Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:36:14.348553 containerd[1245]: time="2024-06-25T14:36:14.348516654Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 14:36:14.348553 containerd[1245]: time="2024-06-25T14:36:14.348531534Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:36:14.348553 containerd[1245]: time="2024-06-25T14:36:14.348540934Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:36:14.348624 containerd[1245]: time="2024-06-25T14:36:14.348552774Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 14:36:14.348764 containerd[1245]: time="2024-06-25T14:36:14.348735054Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:36:14.348764 containerd[1245]: time="2024-06-25T14:36:14.348759494Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:36:14.348813 containerd[1245]: time="2024-06-25T14:36:14.348772694Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:36:14.348813 containerd[1245]: time="2024-06-25T14:36:14.348786334Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jun 25 14:36:14.348813 containerd[1245]: time="2024-06-25T14:36:14.348799454Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.348870 containerd[1245]: time="2024-06-25T14:36:14.348815294Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.348870 containerd[1245]: time="2024-06-25T14:36:14.348827934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.348870 containerd[1245]: time="2024-06-25T14:36:14.348842534Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.348870 containerd[1245]: time="2024-06-25T14:36:14.348855614Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.348870 containerd[1245]: time="2024-06-25T14:36:14.348869174Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.348968 containerd[1245]: time="2024-06-25T14:36:14.348882494Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.348968 containerd[1245]: time="2024-06-25T14:36:14.348899934Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:36:14.349033 containerd[1245]: time="2024-06-25T14:36:14.349025214Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:36:14.349597 containerd[1245]: time="2024-06-25T14:36:14.349551374Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:36:14.349626 containerd[1245]: time="2024-06-25T14:36:14.349602694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355403 containerd[1245]: time="2024-06-25T14:36:14.355358214Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 14:36:14.355474 containerd[1245]: time="2024-06-25T14:36:14.355449614Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:36:14.355768 containerd[1245]: time="2024-06-25T14:36:14.355736334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355768 containerd[1245]: time="2024-06-25T14:36:14.355765254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355815 containerd[1245]: time="2024-06-25T14:36:14.355779014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355815 containerd[1245]: time="2024-06-25T14:36:14.355790174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355815 containerd[1245]: time="2024-06-25T14:36:14.355802414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355886 containerd[1245]: time="2024-06-25T14:36:14.355814854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jun 25 14:36:14.355886 containerd[1245]: time="2024-06-25T14:36:14.355828854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355886 containerd[1245]: time="2024-06-25T14:36:14.355840494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.355886 containerd[1245]: time="2024-06-25T14:36:14.355853334Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:36:14.356043 containerd[1245]: time="2024-06-25T14:36:14.356015414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.356043 containerd[1245]: time="2024-06-25T14:36:14.356041054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.356100 containerd[1245]: time="2024-06-25T14:36:14.356066254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.356100 containerd[1245]: time="2024-06-25T14:36:14.356079574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.356100 containerd[1245]: time="2024-06-25T14:36:14.356091534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.356153 containerd[1245]: time="2024-06-25T14:36:14.356104934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.356153 containerd[1245]: time="2024-06-25T14:36:14.356117734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:36:14.356153 containerd[1245]: time="2024-06-25T14:36:14.356127774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 14:36:14.356424 containerd[1245]: time="2024-06-25T14:36:14.356365414Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:36:14.356805 containerd[1245]: time="2024-06-25T14:36:14.356425734Z" level=info msg="Connect containerd service" Jun 25 14:36:14.356805 containerd[1245]: time="2024-06-25T14:36:14.356459774Z" level=info msg="using legacy CRI server" Jun 25 14:36:14.356805 containerd[1245]: time="2024-06-25T14:36:14.356468614Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:36:14.356805 containerd[1245]: time="2024-06-25T14:36:14.356603614Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:36:14.357413 containerd[1245]: time="2024-06-25T14:36:14.357381374Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:36:14.358076 containerd[1245]: time="2024-06-25T14:36:14.358053294Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:36:14.358114 containerd[1245]: time="2024-06-25T14:36:14.358083054Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:36:14.358114 containerd[1245]: time="2024-06-25T14:36:14.358095694Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:36:14.358114 containerd[1245]: time="2024-06-25T14:36:14.358107574Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:36:14.358314 containerd[1245]: time="2024-06-25T14:36:14.358261694Z" level=info msg="Start subscribing containerd event" Jun 25 14:36:14.358404 containerd[1245]: time="2024-06-25T14:36:14.358389094Z" level=info msg="Start recovering state" Jun 25 14:36:14.358540 containerd[1245]: time="2024-06-25T14:36:14.358524734Z" level=info msg="Start event monitor" Jun 25 14:36:14.358606 containerd[1245]: time="2024-06-25T14:36:14.358593534Z" level=info msg="Start snapshots syncer" Jun 25 14:36:14.358668 containerd[1245]: time="2024-06-25T14:36:14.358640334Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:36:14.358703 containerd[1245]: time="2024-06-25T14:36:14.358692294Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:36:14.358732 containerd[1245]: time="2024-06-25T14:36:14.358649814Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:36:14.358732 containerd[1245]: time="2024-06-25T14:36:14.358720574Z" level=info msg="Start streaming server" Jun 25 14:36:14.358913 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 14:36:14.360137 containerd[1245]: time="2024-06-25T14:36:14.360103454Z" level=info msg="containerd successfully booted in 0.050044s" Jun 25 14:36:14.456983 tar[1243]: linux-arm64/LICENSE Jun 25 14:36:14.457100 tar[1243]: linux-arm64/README.md Jun 25 14:36:14.468958 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:36:15.419151 systemd-networkd[1078]: eth0: Gained IPv6LL Jun 25 14:36:15.420854 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:36:15.421995 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:36:15.432335 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 14:36:15.434693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:15.436794 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:36:15.444529 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 14:36:15.444717 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 14:36:15.446228 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 14:36:15.454966 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 14:36:15.922944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:15.936350 sshd_keygen[1239]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:36:15.957022 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jun 25 14:36:15.967276 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:36:15.972477 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:36:15.972638 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:36:15.975071 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 14:36:15.983906 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:36:15.993392 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 14:36:15.995841 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 14:36:15.996996 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:36:15.997820 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 14:36:16.000155 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 14:36:16.007216 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:36:16.007381 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:36:16.008374 systemd[1]: Startup finished in 531ms (kernel) + 3.990s (initrd) + 3.906s (userspace) = 8.428s. Jun 25 14:36:16.479732 kubelet[1297]: E0625 14:36:16.479675 1297 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:36:16.481502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:36:16.481636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:36:20.724727 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:36:20.726085 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:33072.service - OpenSSH per-connection server daemon (10.0.0.1:33072). Jun 25 14:36:20.785672 sshd[1320]: Accepted publickey for core from 10.0.0.1 port 33072 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:36:20.788414 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:20.796704 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:36:20.809335 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 14:36:20.811140 systemd-logind[1235]: New session 1 of user core. Jun 25 14:36:20.820769 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:36:20.822606 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:36:20.825519 (systemd)[1323]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:20.891570 systemd[1323]: Queued start job for default target default.target. Jun 25 14:36:20.900411 systemd[1323]: Reached target paths.target - Paths. Jun 25 14:36:20.900439 systemd[1323]: Reached target sockets.target - Sockets. Jun 25 14:36:20.900450 systemd[1323]: Reached target timers.target - Timers. Jun 25 14:36:20.900460 systemd[1323]: Reached target basic.target - Basic System. Jun 25 14:36:20.900519 systemd[1323]: Reached target default.target - Main User Target. Jun 25 14:36:20.900548 systemd[1323]: Startup finished in 69ms. Jun 25 14:36:20.900626 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jun 25 14:36:20.902031 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:36:20.965467 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:33080.service - OpenSSH per-connection server daemon (10.0.0.1:33080). Jun 25 14:36:21.005325 sshd[1332]: Accepted publickey for core from 10.0.0.1 port 33080 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:36:21.006478 sshd[1332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:21.010502 systemd-logind[1235]: New session 2 of user core. Jun 25 14:36:21.016122 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 14:36:21.068606 sshd[1332]: pam_unix(sshd:session): session closed for user core Jun 25 14:36:21.080276 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:33080.service: Deactivated successfully. Jun 25 14:36:21.080920 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 14:36:21.081825 systemd-logind[1235]: Session 2 logged out. Waiting for processes to exit. Jun 25 14:36:21.083057 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:33090.service - OpenSSH per-connection server daemon (10.0.0.1:33090). Jun 25 14:36:21.090244 systemd-logind[1235]: Removed session 2. Jun 25 14:36:21.114868 sshd[1338]: Accepted publickey for core from 10.0.0.1 port 33090 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:36:21.116118 sshd[1338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:21.121112 systemd-logind[1235]: New session 3 of user core. Jun 25 14:36:21.126133 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 14:36:21.177143 sshd[1338]: pam_unix(sshd:session): session closed for user core Jun 25 14:36:21.193247 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:33090.service: Deactivated successfully. Jun 25 14:36:21.193864 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 14:36:21.194441 systemd-logind[1235]: Session 3 logged out. Waiting for processes to exit. Jun 25 14:36:21.195682 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:33096.service - OpenSSH per-connection server daemon (10.0.0.1:33096). Jun 25 14:36:21.196919 systemd-logind[1235]: Removed session 3. Jun 25 14:36:21.227240 sshd[1344]: Accepted publickey for core from 10.0.0.1 port 33096 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:36:21.228699 sshd[1344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:21.232657 systemd-logind[1235]: New session 4 of user core. Jun 25 14:36:21.240353 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:36:21.295465 sshd[1344]: pam_unix(sshd:session): session closed for user core Jun 25 14:36:21.308243 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:33096.service: Deactivated successfully. Jun 25 14:36:21.308809 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:36:21.309369 systemd-logind[1235]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:36:21.310584 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:33102.service - OpenSSH per-connection server daemon (10.0.0.1:33102). Jun 25 14:36:21.313059 systemd-logind[1235]: Removed session 4. Jun 25 14:36:21.341207 sshd[1350]: Accepted publickey for core from 10.0.0.1 port 33102 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:36:21.342381 sshd[1350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:21.346155 systemd-logind[1235]: New session 5 of user core. 
Jun 25 14:36:21.358131 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:36:21.425561 sudo[1353]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:36:21.425796 sudo[1353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:36:21.443160 sudo[1353]: pam_unix(sudo:session): session closed for user root Jun 25 14:36:21.448067 sshd[1350]: pam_unix(sshd:session): session closed for user core Jun 25 14:36:21.459243 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:33102.service: Deactivated successfully. Jun 25 14:36:21.459845 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:36:21.460371 systemd-logind[1235]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:36:21.461707 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:33104.service - OpenSSH per-connection server daemon (10.0.0.1:33104). Jun 25 14:36:21.462363 systemd-logind[1235]: Removed session 5. Jun 25 14:36:21.491338 sshd[1357]: Accepted publickey for core from 10.0.0.1 port 33104 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:36:21.492520 sshd[1357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:21.495589 systemd-logind[1235]: New session 6 of user core. Jun 25 14:36:21.506135 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:36:21.558839 sudo[1361]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:36:21.559101 sudo[1361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:36:21.562548 sudo[1361]: pam_unix(sudo:session): session closed for user root Jun 25 14:36:21.566744 sudo[1360]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:36:21.566957 sudo[1360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:36:21.582314 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 14:36:21.582000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:36:21.583659 auditctl[1364]: No rules Jun 25 14:36:21.584072 kernel: kauditd_printk_skb: 99 callbacks suppressed Jun 25 14:36:21.584108 kernel: audit: type=1305 audit(1719326181.582:210): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:36:21.584354 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:36:21.584522 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:36:21.582000 audit[1364]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdac80880 a2=420 a3=0 items=0 ppid=1 pid=1364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:21.586121 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jun 25 14:36:21.587813 kernel: audit: type=1300 audit(1719326181.582:210): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdac80880 a2=420 a3=0 items=0 ppid=1 pid=1364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:21.587862 kernel: audit: type=1327 audit(1719326181.582:210): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:36:21.582000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:36:21.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.591093 kernel: audit: type=1131 audit(1719326181.583:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.606311 augenrules[1381]: No rules Jun 25 14:36:21.606881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:36:21.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.608104 sudo[1360]: pam_unix(sudo:session): session closed for user root Jun 25 14:36:21.607000 audit[1360]: USER_END pid=1360 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.609624 sshd[1357]: pam_unix(sshd:session): session closed for user core Jun 25 14:36:21.611735 kernel: audit: type=1130 audit(1719326181.606:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.611785 kernel: audit: type=1106 audit(1719326181.607:213): pid=1360 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.611802 kernel: audit: type=1104 audit(1719326181.607:214): pid=1360 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.607000 audit[1360]: CRED_DISP pid=1360 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:21.613593 kernel: audit: type=1106 audit(1719326181.609:215): pid=1357 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.609000 audit[1357]: USER_END pid=1357 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.609000 audit[1357]: CRED_DISP pid=1357 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.618731 kernel: audit: type=1104 audit(1719326181.609:216): pid=1357 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.630200 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:33104.service: Deactivated successfully. Jun 25 14:36:21.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.122:22-10.0.0.1:33104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.630806 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 14:36:21.631367 systemd-logind[1235]: Session 6 logged out. Waiting for processes to exit. Jun 25 14:36:21.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.122:22-10.0.0.1:33110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.632596 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:33110.service - OpenSSH per-connection server daemon (10.0.0.1:33110). Jun 25 14:36:21.632988 kernel: audit: type=1131 audit(1719326181.629:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.122:22-10.0.0.1:33104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.633279 systemd-logind[1235]: Removed session 6. 
Jun 25 14:36:21.659000 audit[1387]: USER_ACCT pid=1387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.660956 sshd[1387]: Accepted publickey for core from 10.0.0.1 port 33110 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:36:21.660000 audit[1387]: CRED_ACQ pid=1387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.660000 audit[1387]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd03164b0 a2=3 a3=1 items=0 ppid=1 pid=1387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:21.660000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:36:21.662067 sshd[1387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:36:21.665586 systemd-logind[1235]: New session 7 of user core. Jun 25 14:36:21.671174 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 14:36:21.673000 audit[1387]: USER_START pid=1387 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.674000 audit[1389]: CRED_ACQ pid=1389 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:21.721000 audit[1390]: USER_ACCT pid=1390 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.722317 sudo[1390]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:36:21.721000 audit[1390]: CRED_REFR pid=1390 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.722786 sudo[1390]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:36:21.723000 audit[1390]: USER_START pid=1390 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:21.830338 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:36:22.091041 dockerd[1401]: time="2024-06-25T14:36:22.090907054Z" level=info msg="Starting up" Jun 25 14:36:22.183280 dockerd[1401]: time="2024-06-25T14:36:22.183238454Z" level=info msg="Loading containers: start." 
Jun 25 14:36:22.241000 audit[1436]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.241000 audit[1436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff523eaa0 a2=0 a3=1 items=0 ppid=1401 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.241000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:36:22.242000 audit[1438]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.242000 audit[1438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffd6045c0 a2=0 a3=1 items=0 ppid=1401 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.242000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:36:22.244000 audit[1440]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.244000 audit[1440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff33332a0 a2=0 a3=1 items=0 ppid=1401 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.244000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:36:22.246000 audit[1442]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.246000 audit[1442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc4f0b680 a2=0 a3=1 items=0 ppid=1401 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.246000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:36:22.248000 audit[1444]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.248000 audit[1444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd8bc8520 a2=0 a3=1 items=0 ppid=1401 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.248000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:36:22.250000 audit[1446]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.250000 audit[1446]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcb691aa0 a2=0 a3=1 items=0 ppid=1401 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.250000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:36:22.263000 audit[1448]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.263000 audit[1448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd104dbf0 a2=0 a3=1 items=0 ppid=1401 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.263000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:36:22.265000 audit[1450]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.265000 audit[1450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff96a4890 a2=0 a3=1 items=0 ppid=1401 pid=1450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.265000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:36:22.267000 audit[1452]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.267000 audit[1452]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff83e3760 a2=0 a3=1 items=0 ppid=1401 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.267000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:36:22.274000 audit[1456]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.274000 audit[1456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffed320e40 a2=0 a3=1 items=0 ppid=1401 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:36:22.275000 audit[1457]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.275000 audit[1457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffde31b710 a2=0 a3=1 items=0 ppid=1401 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.275000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:36:22.283001 kernel: Initializing XFRM netlink socket Jun 25 14:36:22.308000 audit[1465]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.308000 audit[1465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffd57f8640 a2=0 a3=1 items=0 ppid=1401 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.308000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:36:22.326000 audit[1468]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.326000 audit[1468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe571cf80 a2=0 a3=1 items=0 ppid=1401 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.326000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:36:22.330000 audit[1472]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.330000 audit[1472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc87d8b90 a2=0 a3=1 items=0 ppid=1401 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.330000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 14:36:22.332000 audit[1474]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.332000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffdfcee150 a2=0 a3=1 items=0 ppid=1401 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.332000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:36:22.334000 audit[1476]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.334000 audit[1476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe1803270 a2=0 a3=1 items=0 ppid=1401 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.334000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:36:22.336000 audit[1478]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.336000 audit[1478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffce17c750 a2=0 a3=1 items=0 ppid=1401 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.336000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:36:22.337000 audit[1480]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.337000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffe9b4b750 a2=0 a3=1 items=0 ppid=1401 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.337000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:36:22.342000 audit[1483]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.342000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe3dec2e0 a2=0 a3=1 items=0 ppid=1401 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.342000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:36:22.345000 audit[1485]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.345000 audit[1485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffd8661b70 a2=0 a3=1 items=0 ppid=1401 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.345000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:36:22.346000 audit[1487]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.346000 audit[1487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc35711f0 a2=0 a3=1 items=0 ppid=1401 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.346000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:36:22.348000 audit[1489]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.348000 audit[1489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd18eb630 a2=0 a3=1 items=0 ppid=1401 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.348000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:36:22.350014 systemd-networkd[1078]: docker0: Link UP Jun 25 14:36:22.356000 audit[1493]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.356000 audit[1493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffeacf5ff0 a2=0 a3=1 items=0 ppid=1401 pid=1493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.356000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:36:22.357000 audit[1494]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:22.357000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd915f130 a2=0 a3=1 items=0 ppid=1401 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:22.357000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:36:22.358338 dockerd[1401]: time="2024-06-25T14:36:22.358298934Z" level=info msg="Loading containers: done." Jun 25 14:36:22.424658 dockerd[1401]: time="2024-06-25T14:36:22.424615774Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:36:22.425014 dockerd[1401]: time="2024-06-25T14:36:22.424970414Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:36:22.425190 dockerd[1401]: time="2024-06-25T14:36:22.425173814Z" level=info msg="Daemon has completed initialization" Jun 25 14:36:22.448297 dockerd[1401]: time="2024-06-25T14:36:22.448248774Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:36:22.448409 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jun 25 14:36:22.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:22.942742 containerd[1245]: time="2024-06-25T14:36:22.942676814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 14:36:23.164625 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4263773094-merged.mount: Deactivated successfully. Jun 25 14:36:23.561630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073510714.mount: Deactivated successfully. Jun 25 14:36:24.481132 containerd[1245]: time="2024-06-25T14:36:24.481086854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:24.481533 containerd[1245]: time="2024-06-25T14:36:24.481485694Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940432" Jun 25 14:36:24.483604 containerd[1245]: time="2024-06-25T14:36:24.483567334Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:24.485510 containerd[1245]: time="2024-06-25T14:36:24.485472254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:24.487489 containerd[1245]: time="2024-06-25T14:36:24.487456774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:24.488674 containerd[1245]: time="2024-06-25T14:36:24.488635334Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 1.54589572s" Jun 25 14:36:24.488730 containerd[1245]: time="2024-06-25T14:36:24.488676534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jun 25 14:36:24.507258 containerd[1245]: time="2024-06-25T14:36:24.507216214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 14:36:26.443884 containerd[1245]: time="2024-06-25T14:36:26.443834414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:26.444339 containerd[1245]: time="2024-06-25T14:36:26.444290774Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881373" Jun 25 14:36:26.445411 containerd[1245]: time="2024-06-25T14:36:26.445372974Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:26.447757 containerd[1245]: time="2024-06-25T14:36:26.447729494Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:26.449602 containerd[1245]: time="2024-06-25T14:36:26.449571574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:26.450879 containerd[1245]: time="2024-06-25T14:36:26.450844174Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 1.94358248s" Jun 25 14:36:26.451014 containerd[1245]: time="2024-06-25T14:36:26.450993294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jun 25 14:36:26.470844 containerd[1245]: time="2024-06-25T14:36:26.470788254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 14:36:26.732414 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:36:26.733415 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 14:36:26.733458 kernel: audit: type=1130 audit(1719326186.731:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:26.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:26.732590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:26.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:26.737381 kernel: audit: type=1131 audit(1719326186.731:253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:26.743430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:26.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:26.846771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:26.850004 kernel: audit: type=1130 audit(1719326186.846:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:26.939494 kubelet[1617]: E0625 14:36:26.939438 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:36:26.942288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:36:26.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:36:26.942428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:36:26.945022 kernel: audit: type=1131 audit(1719326186.941:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:36:27.666362 containerd[1245]: time="2024-06-25T14:36:27.666276014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:27.667364 containerd[1245]: time="2024-06-25T14:36:27.667309134Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155690" Jun 25 14:36:27.668058 containerd[1245]: time="2024-06-25T14:36:27.668017374Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:27.672375 containerd[1245]: time="2024-06-25T14:36:27.672308094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:27.674898 containerd[1245]: time="2024-06-25T14:36:27.674844694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:27.676124 containerd[1245]: time="2024-06-25T14:36:27.676059654Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 1.20521548s" Jun 25 14:36:27.676124 containerd[1245]: time="2024-06-25T14:36:27.676104574Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jun 25 14:36:27.700005 containerd[1245]: time="2024-06-25T14:36:27.699947254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 14:36:28.685567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320299238.mount: Deactivated successfully. 
Jun 25 14:36:30.171078 containerd[1245]: time="2024-06-25T14:36:30.171016174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:30.172112 containerd[1245]: time="2024-06-25T14:36:30.172078294Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634094" Jun 25 14:36:30.173331 containerd[1245]: time="2024-06-25T14:36:30.173298414Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:30.175259 containerd[1245]: time="2024-06-25T14:36:30.175231094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:30.176734 containerd[1245]: time="2024-06-25T14:36:30.176694054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:30.177704 containerd[1245]: time="2024-06-25T14:36:30.177671734Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 2.47752752s" Jun 25 14:36:30.178138 containerd[1245]: time="2024-06-25T14:36:30.178113614Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jun 25 14:36:30.199296 containerd[1245]: time="2024-06-25T14:36:30.199258534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 14:36:30.779951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605283259.mount: Deactivated successfully. 
Jun 25 14:36:31.710613 containerd[1245]: time="2024-06-25T14:36:31.710561814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:31.711193 containerd[1245]: time="2024-06-25T14:36:31.711155134Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jun 25 14:36:31.712013 containerd[1245]: time="2024-06-25T14:36:31.711962094Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:31.714258 containerd[1245]: time="2024-06-25T14:36:31.714223254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:31.716536 containerd[1245]: time="2024-06-25T14:36:31.716500774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:31.717897 containerd[1245]: time="2024-06-25T14:36:31.717766054Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.51833112s" Jun 25 14:36:31.718011 containerd[1245]: time="2024-06-25T14:36:31.717990814Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jun 25 14:36:31.736593 containerd[1245]: time="2024-06-25T14:36:31.736555654Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:36:32.129761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328305841.mount: Deactivated successfully. 
Jun 25 14:36:32.135221 containerd[1245]: time="2024-06-25T14:36:32.135168094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:32.135852 containerd[1245]: time="2024-06-25T14:36:32.135815974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jun 25 14:36:32.136611 containerd[1245]: time="2024-06-25T14:36:32.136587334Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:32.138320 containerd[1245]: time="2024-06-25T14:36:32.138283894Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:32.140112 containerd[1245]: time="2024-06-25T14:36:32.140081534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:32.141189 containerd[1245]: time="2024-06-25T14:36:32.141152254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 404.38804ms" Jun 25 14:36:32.141283 containerd[1245]: time="2024-06-25T14:36:32.141265214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:36:32.159516 containerd[1245]: time="2024-06-25T14:36:32.159477094Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 14:36:32.671819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125818049.mount: Deactivated successfully. 
Jun 25 14:36:34.663346 containerd[1245]: time="2024-06-25T14:36:34.663272214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:34.663841 containerd[1245]: time="2024-06-25T14:36:34.663811774Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jun 25 14:36:34.664855 containerd[1245]: time="2024-06-25T14:36:34.664823094Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:34.666913 containerd[1245]: time="2024-06-25T14:36:34.666876774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:34.669309 containerd[1245]: time="2024-06-25T14:36:34.669270614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:36:34.670659 containerd[1245]: time="2024-06-25T14:36:34.670618334Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.5110972s" Jun 25 14:36:34.670712 containerd[1245]: time="2024-06-25T14:36:34.670660734Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jun 25 14:36:37.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:37.193194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:36:37.193371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:37.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:37.197032 kernel: audit: type=1130 audit(1719326197.192:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:37.197089 kernel: audit: type=1131 audit(1719326197.192:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:37.203271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:37.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:37.291829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 14:36:37.295010 kernel: audit: type=1130 audit(1719326197.291:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:37.328375 kubelet[1833]: E0625 14:36:37.328323 1833 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:36:37.330411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:36:37.330548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:36:37.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:36:37.333003 kernel: audit: type=1131 audit(1719326197.329:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:36:39.553168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:39.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:39.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:39.556947 kernel: audit: type=1130 audit(1719326199.552:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:39.557026 kernel: audit: type=1131 audit(1719326199.552:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:39.565692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:39.584595 systemd[1]: Reloading. Jun 25 14:36:39.962365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 14:36:40.007000 audit: BPF prog-id=44 op=LOAD Jun 25 14:36:40.007000 audit: BPF prog-id=30 op=UNLOAD Jun 25 14:36:40.009346 kernel: audit: type=1334 audit(1719326200.007:262): prog-id=44 op=LOAD Jun 25 14:36:40.009473 kernel: audit: type=1334 audit(1719326200.007:263): prog-id=30 op=UNLOAD Jun 25 14:36:40.010099 kernel: audit: type=1334 audit(1719326200.008:264): prog-id=45 op=LOAD Jun 25 14:36:40.010170 kernel: audit: type=1334 audit(1719326200.008:265): prog-id=31 op=UNLOAD Jun 25 14:36:40.008000 audit: BPF prog-id=45 op=LOAD Jun 25 14:36:40.008000 audit: BPF prog-id=31 op=UNLOAD Jun 25 14:36:40.008000 audit: BPF prog-id=46 op=LOAD Jun 25 14:36:40.008000 audit: BPF prog-id=47 op=LOAD Jun 25 14:36:40.008000 audit: BPF prog-id=32 op=UNLOAD Jun 25 14:36:40.008000 audit: BPF prog-id=33 op=UNLOAD Jun 25 14:36:40.009000 audit: BPF prog-id=48 op=LOAD Jun 25 14:36:40.009000 audit: BPF prog-id=41 op=UNLOAD Jun 25 14:36:40.009000 audit: BPF prog-id=49 op=LOAD Jun 25 14:36:40.009000 audit: BPF prog-id=50 op=LOAD Jun 25 14:36:40.009000 audit: BPF prog-id=42 op=UNLOAD Jun 25 14:36:40.009000 audit: BPF prog-id=43 op=UNLOAD Jun 25 14:36:40.010000 audit: BPF prog-id=51 op=LOAD Jun 25 14:36:40.010000 audit: BPF prog-id=40 op=UNLOAD Jun 25 14:36:40.012000 audit: BPF prog-id=52 op=LOAD Jun 25 14:36:40.012000 audit: BPF prog-id=39 op=UNLOAD Jun 25 14:36:40.014000 audit: BPF prog-id=53 op=LOAD Jun 25 14:36:40.014000 audit: BPF prog-id=54 op=LOAD Jun 25 14:36:40.014000 audit: BPF prog-id=34 op=UNLOAD Jun 25 14:36:40.014000 audit: BPF prog-id=35 op=UNLOAD Jun 25 14:36:40.015000 audit: BPF prog-id=55 op=LOAD Jun 25 14:36:40.015000 audit: BPF prog-id=36 op=UNLOAD Jun 25 14:36:40.015000 audit: BPF prog-id=56 op=LOAD Jun 25 14:36:40.015000 audit: BPF prog-id=57 op=LOAD Jun 25 14:36:40.015000 audit: BPF prog-id=37 op=UNLOAD Jun 25 14:36:40.015000 audit: BPF prog-id=38 op=UNLOAD Jun 25 14:36:40.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:40.036775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:40.038967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:40.039240 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:36:40.039608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:40.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:40.041381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:40.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:40.127867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:40.168392 kubelet[1908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 14:36:40.168725 kubelet[1908]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:36:40.168772 kubelet[1908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:36:40.169045 kubelet[1908]: I0625 14:36:40.169012 1908 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:36:40.978467 kubelet[1908]: I0625 14:36:40.978421 1908 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 14:36:40.978467 kubelet[1908]: I0625 14:36:40.978452 1908 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:36:40.978704 kubelet[1908]: I0625 14:36:40.978675 1908 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 14:36:41.020059 kubelet[1908]: E0625 14:36:41.020027 1908 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.020282 kubelet[1908]: I0625 14:36:41.020256 1908 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:36:41.029210 kubelet[1908]: I0625 14:36:41.029180 1908 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:36:41.030534 kubelet[1908]: I0625 14:36:41.030483 1908 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:36:41.030802 kubelet[1908]: I0625 14:36:41.030623 1908 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:36:41.031025 kubelet[1908]: I0625 14:36:41.031010 1908 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:36:41.031089 kubelet[1908]: I0625 14:36:41.031080 1908 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:36:41.031391 kubelet[1908]: I0625 14:36:41.031376 1908 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:36:41.034099 kubelet[1908]: I0625 14:36:41.034079 1908 kubelet.go:400] "Attempting to sync node with API server" Jun 25 14:36:41.034188 kubelet[1908]: I0625 14:36:41.034178 1908 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:36:41.034551 kubelet[1908]: I0625 14:36:41.034540 1908 kubelet.go:312] "Adding apiserver pod source" Jun 25 14:36:41.034618 kubelet[1908]: I0625 14:36:41.034608 1908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:36:41.034861 kubelet[1908]: W0625 14:36:41.034779 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.034861 kubelet[1908]: E0625 14:36:41.034839 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.035193 kubelet[1908]: W0625 14:36:41.035099 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.035193 kubelet[1908]: E0625 14:36:41.035151 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.035903 kubelet[1908]: I0625 14:36:41.035887 1908 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:36:41.036272 kubelet[1908]: I0625 14:36:41.036247 1908 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 14:36:41.036368 kubelet[1908]: W0625 14:36:41.036358 1908 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 14:36:41.037143 kubelet[1908]: I0625 14:36:41.037116 1908 server.go:1264] "Started kubelet" Jun 25 14:36:41.037604 kubelet[1908]: I0625 14:36:41.037557 1908 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 14:36:41.037938 kubelet[1908]: I0625 14:36:41.037919 1908 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:36:41.038073 kubelet[1908]: I0625 14:36:41.038052 1908 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:36:41.039297 kubelet[1908]: I0625 14:36:41.039276 1908 server.go:455] "Adding debug handlers to kubelet server" Jun 25 14:36:41.040289 kubelet[1908]: I0625 14:36:41.040251 1908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:36:41.042000 audit[1920]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.042000 audit[1920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff30ff370 a2=0 a3=1 items=0 ppid=1908 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:36:41.042000 audit[1921]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.042000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2551b40 a2=0 a3=1 items=0 ppid=1908 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:36:41.049020 kubelet[1908]: I0625 14:36:41.048963 1908 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:36:41.049383 kubelet[1908]: I0625 14:36:41.049358 1908 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 14:36:41.049623 kubelet[1908]: I0625 14:36:41.049608 1908 reconciler.go:26] "Reconciler: start to sync state" Jun 25 
14:36:41.050252 kubelet[1908]: E0625 14:36:41.050219 1908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" Jun 25 14:36:41.049000 audit[1923]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.049000 audit[1923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffeab5c9e0 a2=0 a3=1 items=0 ppid=1908 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:36:41.051494 kubelet[1908]: E0625 14:36:41.051251 1908 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc461209d6885e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 14:36:41.037097054 +0000 UTC m=+0.904720121,LastTimestamp:2024-06-25 14:36:41.037097054 +0000 UTC m=+0.904720121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 14:36:41.051000 audit[1925]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.051000 audit[1925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffca020e10 a2=0 a3=1 items=0 ppid=1908 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.051000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:36:41.052598 kubelet[1908]: W0625 14:36:41.052372 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.052598 kubelet[1908]: E0625 14:36:41.052443 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.052827 kubelet[1908]: I0625 14:36:41.052804 1908 factory.go:221] Registration of the systemd container factory successfully Jun 25 14:36:41.052928 kubelet[1908]: I0625 14:36:41.052900 1908 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Jun 25 14:36:41.054022 kubelet[1908]: I0625 14:36:41.053988 1908 factory.go:221] Registration of the containerd container factory successfully Jun 25 14:36:41.058412 kubelet[1908]: E0625 14:36:41.058373 1908 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:36:41.057000 audit[1929]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.057000 audit[1929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc56f9a30 a2=0 a3=1 items=0 ppid=1908 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:36:41.061580 kubelet[1908]: I0625 14:36:41.061530 1908 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:36:41.061000 audit[1932]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:36:41.061000 audit[1932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd12190a0 a2=0 a3=1 items=0 ppid=1908 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:36:41.062624 kubelet[1908]: I0625 14:36:41.062602 1908 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:36:41.062658 kubelet[1908]: I0625 14:36:41.062635 1908 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:36:41.062658 kubelet[1908]: I0625 14:36:41.062653 1908 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 14:36:41.062716 kubelet[1908]: E0625 14:36:41.062696 1908 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:36:41.062000 audit[1933]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.062000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd94fc1b0 a2=0 a3=1 items=0 ppid=1908 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:36:41.063000 audit[1934]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.063000 audit[1934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc7dd1d0 a2=0 a3=1 items=0 ppid=1908 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:36:41.066080 kubelet[1908]: W0625 14:36:41.066027 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.066198 kubelet[1908]: E0625 14:36:41.066182 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:41.065000 audit[1937]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:36:41.065000 audit[1937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec83a010 a2=0 a3=1 items=0 ppid=1908 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:36:41.066000 audit[1940]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:36:41.066000 audit[1940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffe17ab480 a2=0 a3=1 items=0 ppid=1908 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:36:41.068340 kubelet[1908]: I0625 14:36:41.068315 1908 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:36:41.068340 kubelet[1908]: I0625 14:36:41.068335 1908 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:36:41.068432 kubelet[1908]: I0625 14:36:41.068413 1908 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:36:41.067000 audit[1938]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:36:41.067000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4f16e00 a2=0 a3=1 items=0 ppid=1908 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:36:41.068000 audit[1941]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:36:41.068000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd6230e00 a2=0 a3=1 items=0 ppid=1908 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:41.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:36:41.129794 kubelet[1908]: I0625 14:36:41.129753 1908 policy_none.go:49] "None policy: Start" Jun 25 14:36:41.130518 kubelet[1908]: I0625 14:36:41.130499 1908 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 14:36:41.130587 kubelet[1908]: I0625 14:36:41.130531 1908 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:36:41.136460 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 14:36:41.146439 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 14:36:41.148765 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
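
The container manager settings logged above ("CgroupDriver":"systemd", "CgroupsPerQOS":true) are what produce the kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units systemd just created. As a purely illustrative aside, the short sketch below lists those slices on a host with a unified cgroup v2 hierarchy; the /sys/fs/cgroup mount point and the memory.current file are assumptions about the host, not values taken from this log.

```python
#!/usr/bin/env python3
"""List the kubepods QoS slices created by the kubelet container manager.

Illustrative sketch only: assumes a unified cgroup v2 hierarchy mounted at
/sys/fs/cgroup and the systemd cgroup driver reported in the log; neither
path is taken from the log itself.
"""
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")  # assumption: cgroup v2 unified mount


def kubepods_slices():
    """Yield (relative slice path, memory usage in bytes) for kubepods cgroups."""
    dirs = sorted(p for p in CGROUP_ROOT.glob("kubepods.slice/**") if p.is_dir())
    for slice_dir in dirs:
        usage_file = slice_dir / "memory.current"
        usage = int(usage_file.read_text()) if usage_file.exists() else 0
        yield slice_dir.relative_to(CGROUP_ROOT), usage


if __name__ == "__main__":
    for name, usage in kubepods_slices():
        print(f"{name}: {usage} bytes")
```

Once pods are admitted, each burstable or best-effort pod shows up as a child slice under the matching QoS parent.
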
Jun 25 14:36:41.150291 kubelet[1908]: I0625 14:36:41.150266 1908 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 14:36:41.150707 kubelet[1908]: E0625 14:36:41.150679 1908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jun 25 14:36:41.162939 kubelet[1908]: E0625 14:36:41.162896 1908 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 14:36:41.170856 kubelet[1908]: I0625 14:36:41.170833 1908 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:36:41.171355 kubelet[1908]: I0625 14:36:41.171304 1908 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 14:36:41.171518 kubelet[1908]: I0625 14:36:41.171506 1908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:36:41.173302 kubelet[1908]: E0625 14:36:41.173275 1908 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 14:36:41.252005 kubelet[1908]: E0625 14:36:41.251854 1908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="400ms" Jun 25 14:36:41.352194 kubelet[1908]: I0625 14:36:41.352150 1908 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 14:36:41.352534 kubelet[1908]: E0625 14:36:41.352506 1908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jun 25 14:36:41.363788 kubelet[1908]: I0625 14:36:41.363755 1908 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 14:36:41.365173 kubelet[1908]: I0625 14:36:41.365134 1908 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 14:36:41.366129 kubelet[1908]: I0625 14:36:41.366099 1908 topology_manager.go:215] "Topology Admit Handler" podUID="b4db767ddf5e974c662d0ea97654c07c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 14:36:41.370863 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jun 25 14:36:41.383764 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. Jun 25 14:36:41.386954 systemd[1]: Created slice kubepods-burstable-podb4db767ddf5e974c662d0ea97654c07c.slice - libcontainer container kubepods-burstable-podb4db767ddf5e974c662d0ea97654c07c.slice. 
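
Every request above to https://10.0.0.122:6443 fails with "connection refused" because kube-apiserver is itself one of the static pods the kubelet is only now admitting from /etc/kubernetes/manifests, so lease creation, node registration and the informer lists all retry until that pod is serving. The sketch below is a minimal stand-in for that kind of wait loop, assuming only the address and port shown in the log; the fixed one-second backoff is an arbitrary illustration, not the kubelet's actual retry policy.

```python
#!/usr/bin/env python3
"""Wait for the API server socket seen in the log to accept connections.

Sketch only: 10.0.0.122:6443 comes from the kubelet log above; the retry
policy (fixed 1s delay, 120 attempts) is an arbitrary illustration.
"""
import socket
import time

HOST, PORT = "10.0.0.122", 6443  # endpoint taken from the log above


def wait_for_apiserver(attempts: int = 120, delay: float = 1.0) -> bool:
    for i in range(attempts):
        try:
            with socket.create_connection((HOST, PORT), timeout=2.0):
                print(f"attempt {i + 1}: {HOST}:{PORT} is accepting connections")
                return True
        except OSError as err:  # e.g. ConnectionRefusedError while apiserver is down
            print(f"attempt {i + 1}: {err}")
            time.sleep(delay)
    return False


if __name__ == "__main__":
    wait_for_apiserver()
```
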
Jun 25 14:36:41.451484 kubelet[1908]: I0625 14:36:41.451440 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:41.451609 kubelet[1908]: I0625 14:36:41.451518 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:41.451609 kubelet[1908]: I0625 14:36:41.451544 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 14:36:41.451609 kubelet[1908]: I0625 14:36:41.451560 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4db767ddf5e974c662d0ea97654c07c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4db767ddf5e974c662d0ea97654c07c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:41.451696 kubelet[1908]: I0625 14:36:41.451609 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:41.451696 kubelet[1908]: I0625 14:36:41.451624 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:41.451696 kubelet[1908]: I0625 14:36:41.451640 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:41.451696 kubelet[1908]: I0625 14:36:41.451681 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4db767ddf5e974c662d0ea97654c07c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4db767ddf5e974c662d0ea97654c07c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:41.451778 kubelet[1908]: I0625 14:36:41.451724 1908 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4db767ddf5e974c662d0ea97654c07c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4db767ddf5e974c662d0ea97654c07c\") " 
pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:41.652508 kubelet[1908]: E0625 14:36:41.652384 1908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" Jun 25 14:36:41.682891 kubelet[1908]: E0625 14:36:41.682849 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:41.683762 containerd[1245]: time="2024-06-25T14:36:41.683712334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jun 25 14:36:41.685907 kubelet[1908]: E0625 14:36:41.685864 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:41.686310 containerd[1245]: time="2024-06-25T14:36:41.686262654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jun 25 14:36:41.690591 kubelet[1908]: E0625 14:36:41.690566 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:41.691144 containerd[1245]: time="2024-06-25T14:36:41.690986174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4db767ddf5e974c662d0ea97654c07c,Namespace:kube-system,Attempt:0,}" Jun 25 14:36:41.754017 kubelet[1908]: I0625 14:36:41.753937 1908 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 14:36:41.754284 kubelet[1908]: E0625 14:36:41.754259 1908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jun 25 14:36:42.264129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount310073387.mount: Deactivated successfully. 
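
The three RunPodSandbox calls above ask containerd's CRI plugin to create pause sandboxes for the static control-plane pods, which is why the pause:3.8 image activity follows. The sketch below queries a CRI endpoint for its sandboxes with crictl; the socket path and the presence of crictl on the host are assumptions rather than facts from this log.

```python
#!/usr/bin/env python3
"""List CRI pod sandboxes, mirroring the RunPodSandbox calls in the log.

Sketch only: assumes `crictl` is installed and that containerd's CRI socket
lives at /run/containerd/containerd.sock; neither is stated in this log.
"""
import json
import subprocess

CRI_ENDPOINT = "unix:///run/containerd/containerd.sock"  # assumption


def list_sandboxes():
    out = subprocess.run(
        ["crictl", "--runtime-endpoint", CRI_ENDPOINT, "pods", "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for item in json.loads(out).get("items", []):
        meta = item["metadata"]
        print(item["id"][:13], f'{meta["namespace"]}/{meta["name"]}', item["state"])


if __name__ == "__main__":
    list_sandboxes()
```
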
Jun 25 14:36:42.268823 containerd[1245]: time="2024-06-25T14:36:42.268768254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.270529 containerd[1245]: time="2024-06-25T14:36:42.270478374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jun 25 14:36:42.272294 containerd[1245]: time="2024-06-25T14:36:42.272259294Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.275763 containerd[1245]: time="2024-06-25T14:36:42.275716294Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.277211 containerd[1245]: time="2024-06-25T14:36:42.277174614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:36:42.278482 containerd[1245]: time="2024-06-25T14:36:42.278436734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.279571 containerd[1245]: time="2024-06-25T14:36:42.279540014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.281076 containerd[1245]: time="2024-06-25T14:36:42.281046814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:36:42.282115 containerd[1245]: time="2024-06-25T14:36:42.282075774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.283013 containerd[1245]: time="2024-06-25T14:36:42.282953014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.13416ms" Jun 25 14:36:42.284448 containerd[1245]: time="2024-06-25T14:36:42.284402054Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.285430 containerd[1245]: time="2024-06-25T14:36:42.285397334Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.288681 containerd[1245]: time="2024-06-25T14:36:42.288644694Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 
14:36:42.290021 containerd[1245]: time="2024-06-25T14:36:42.289966774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.292013 containerd[1245]: time="2024-06-25T14:36:42.291969414Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.293087 containerd[1245]: time="2024-06-25T14:36:42.293056334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 601.98188ms" Jun 25 14:36:42.293800 containerd[1245]: time="2024-06-25T14:36:42.293775014Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:36:42.294796 containerd[1245]: time="2024-06-25T14:36:42.294760334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 608.38752ms" Jun 25 14:36:42.376262 kubelet[1908]: W0625 14:36:42.376225 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.376262 kubelet[1908]: E0625 14:36:42.376266 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.384699 kubelet[1908]: W0625 14:36:42.384635 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.384699 kubelet[1908]: E0625 14:36:42.384695 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.432266 kubelet[1908]: W0625 14:36:42.432229 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.432266 kubelet[1908]: E0625 14:36:42.432274 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
Get "https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.435466 containerd[1245]: time="2024-06-25T14:36:42.435348734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:36:42.435466 containerd[1245]: time="2024-06-25T14:36:42.435435574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:36:42.435466 containerd[1245]: time="2024-06-25T14:36:42.435451694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:36:42.435631 containerd[1245]: time="2024-06-25T14:36:42.435476574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:36:42.436289 containerd[1245]: time="2024-06-25T14:36:42.436005934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:36:42.436289 containerd[1245]: time="2024-06-25T14:36:42.436123574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:36:42.436289 containerd[1245]: time="2024-06-25T14:36:42.436143414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:36:42.436289 containerd[1245]: time="2024-06-25T14:36:42.436156814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:36:42.437570 containerd[1245]: time="2024-06-25T14:36:42.437478574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:36:42.437570 containerd[1245]: time="2024-06-25T14:36:42.437523854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:36:42.437721 containerd[1245]: time="2024-06-25T14:36:42.437669054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:36:42.437721 containerd[1245]: time="2024-06-25T14:36:42.437705854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:36:42.452173 systemd[1]: Started cri-containerd-ed5d8056fc882f421ac9ce3978607cfbded1507c833d31c053e1dc494a40d99d.scope - libcontainer container ed5d8056fc882f421ac9ce3978607cfbded1507c833d31c053e1dc494a40d99d. Jun 25 14:36:42.455179 systemd[1]: Started cri-containerd-6643574c0b6e5e68430c361b5e22d48918495468867be61b2240c7f74f1e539f.scope - libcontainer container 6643574c0b6e5e68430c361b5e22d48918495468867be61b2240c7f74f1e539f. Jun 25 14:36:42.456088 systemd[1]: Started cri-containerd-cd274c907a69f55e7621c58a54796e58fc322999305e9cc13cab66d08df88a29.scope - libcontainer container cd274c907a69f55e7621c58a54796e58fc322999305e9cc13cab66d08df88a29. 
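
Each sandbox started above gets its own transient systemd scope (the cri-containerd-<sandbox-id>.scope units), which is how the kubelet's systemd cgroup driver delegates the container cgroups. The sketch below lists those scopes with systemctl; it assumes it runs on the same node and that the scope naming seen in this log is still in place.

```python
#!/usr/bin/env python3
"""Show the transient cri-containerd-*.scope units systemd started above.

Sketch only: assumes it runs on the same node, with systemd as PID 1 and the
scope naming seen in the "Started cri-containerd-<id>.scope" lines.
"""
import subprocess


def cri_scopes():
    out = subprocess.run(
        ["systemctl", "list-units", "--type=scope", "--all",
         "--no-legend", "--plain", "cri-containerd-*"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields:
            # list-units columns are UNIT LOAD ACTIVE SUB DESCRIPTION
            active = fields[2] if len(fields) > 2 else "?"
            print(f"{fields[0]}  active={active}")


if __name__ == "__main__":
    cri_scopes()
```
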
Jun 25 14:36:42.457357 kubelet[1908]: E0625 14:36:42.457317 1908 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="1.6s" Jun 25 14:36:42.462000 audit: BPF prog-id=58 op=LOAD Jun 25 14:36:42.464147 kernel: kauditd_printk_skb: 63 callbacks suppressed Jun 25 14:36:42.464215 kernel: audit: type=1334 audit(1719326202.462:305): prog-id=58 op=LOAD Jun 25 14:36:42.463000 audit: BPF prog-id=59 op=LOAD Jun 25 14:36:42.465282 kernel: audit: type=1334 audit(1719326202.463:306): prog-id=59 op=LOAD Jun 25 14:36:42.465315 kernel: audit: type=1300 audit(1719326202.463:306): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=1971 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.463000 audit[2002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=1971 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356438303536666338383266343231616339636533393738363037 Jun 25 14:36:42.470062 kernel: audit: type=1327 audit(1719326202.463:306): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356438303536666338383266343231616339636533393738363037 Jun 25 14:36:42.470143 kernel: audit: type=1334 audit(1719326202.463:307): prog-id=60 op=LOAD Jun 25 14:36:42.463000 audit: BPF prog-id=60 op=LOAD Jun 25 14:36:42.470402 kubelet[1908]: W0625 14:36:42.470352 1908 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.470478 kubelet[1908]: E0625 14:36:42.470411 1908 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.122:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.122:6443: connect: connection refused Jun 25 14:36:42.463000 audit[2002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=1971 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.473057 kernel: audit: type=1300 audit(1719326202.463:307): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=1971 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.473108 kernel: audit: type=1327 audit(1719326202.463:307): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356438303536666338383266343231616339636533393738363037 Jun 25 14:36:42.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356438303536666338383266343231616339636533393738363037 Jun 25 14:36:42.463000 audit: BPF prog-id=60 op=UNLOAD Jun 25 14:36:42.476095 kernel: audit: type=1334 audit(1719326202.463:308): prog-id=60 op=UNLOAD Jun 25 14:36:42.463000 audit: BPF prog-id=59 op=UNLOAD Jun 25 14:36:42.477536 kernel: audit: type=1334 audit(1719326202.463:309): prog-id=59 op=UNLOAD Jun 25 14:36:42.477597 kernel: audit: type=1334 audit(1719326202.463:310): prog-id=61 op=LOAD Jun 25 14:36:42.463000 audit: BPF prog-id=61 op=LOAD Jun 25 14:36:42.463000 audit[2002]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=1971 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564356438303536666338383266343231616339636533393738363037 Jun 25 14:36:42.469000 audit: BPF prog-id=62 op=LOAD Jun 25 14:36:42.469000 audit: BPF prog-id=63 op=LOAD Jun 25 14:36:42.469000 audit[2003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=1972 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364323734633930376136396635356537363231633538613534373936 Jun 25 14:36:42.469000 audit: BPF prog-id=64 op=LOAD Jun 25 14:36:42.469000 audit[2003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=1972 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364323734633930376136396635356537363231633538613534373936 Jun 25 14:36:42.472000 audit: BPF prog-id=64 op=UNLOAD Jun 25 14:36:42.472000 audit: BPF prog-id=63 op=UNLOAD Jun 25 14:36:42.472000 audit: BPF prog-id=65 op=LOAD Jun 25 14:36:42.472000 audit[2003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=1972 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 14:36:42.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364323734633930376136396635356537363231633538613534373936 Jun 25 14:36:42.477000 audit: BPF prog-id=66 op=LOAD Jun 25 14:36:42.477000 audit: BPF prog-id=67 op=LOAD Jun 25 14:36:42.477000 audit[2008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=1973 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343335373463306236653565363834333063333631623565323264 Jun 25 14:36:42.478000 audit: BPF prog-id=68 op=LOAD Jun 25 14:36:42.478000 audit[2008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=1973 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343335373463306236653565363834333063333631623565323264 Jun 25 14:36:42.478000 audit: BPF prog-id=68 op=UNLOAD Jun 25 14:36:42.478000 audit: BPF prog-id=67 op=UNLOAD Jun 25 14:36:42.478000 audit: BPF prog-id=69 op=LOAD Jun 25 14:36:42.478000 audit[2008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=1973 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636343335373463306236653565363834333063333631623565323264 Jun 25 14:36:42.498125 containerd[1245]: time="2024-06-25T14:36:42.497942774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4db767ddf5e974c662d0ea97654c07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6643574c0b6e5e68430c361b5e22d48918495468867be61b2240c7f74f1e539f\"" Jun 25 14:36:42.498906 kubelet[1908]: E0625 14:36:42.498884 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:42.501359 containerd[1245]: time="2024-06-25T14:36:42.501326574Z" level=info msg="CreateContainer within sandbox \"6643574c0b6e5e68430c361b5e22d48918495468867be61b2240c7f74f1e539f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:36:42.503264 containerd[1245]: time="2024-06-25T14:36:42.503214294Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd274c907a69f55e7621c58a54796e58fc322999305e9cc13cab66d08df88a29\"" Jun 25 14:36:42.503516 containerd[1245]: time="2024-06-25T14:36:42.503493414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed5d8056fc882f421ac9ce3978607cfbded1507c833d31c053e1dc494a40d99d\"" Jun 25 14:36:42.504455 kubelet[1908]: E0625 14:36:42.504433 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:42.504639 kubelet[1908]: E0625 14:36:42.504621 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:42.506707 containerd[1245]: time="2024-06-25T14:36:42.506674654Z" level=info msg="CreateContainer within sandbox \"ed5d8056fc882f421ac9ce3978607cfbded1507c833d31c053e1dc494a40d99d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:36:42.507445 containerd[1245]: time="2024-06-25T14:36:42.507395374Z" level=info msg="CreateContainer within sandbox \"cd274c907a69f55e7621c58a54796e58fc322999305e9cc13cab66d08df88a29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:36:42.520787 containerd[1245]: time="2024-06-25T14:36:42.520670574Z" level=info msg="CreateContainer within sandbox \"ed5d8056fc882f421ac9ce3978607cfbded1507c833d31c053e1dc494a40d99d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2fac678a9db954fb91f0badf4d15f2ab003aec744eab9e4c2958cef72e62d13d\"" Jun 25 14:36:42.522900 containerd[1245]: time="2024-06-25T14:36:42.522866454Z" level=info msg="StartContainer for \"2fac678a9db954fb91f0badf4d15f2ab003aec744eab9e4c2958cef72e62d13d\"" Jun 25 14:36:42.527040 containerd[1245]: time="2024-06-25T14:36:42.526994414Z" level=info msg="CreateContainer within sandbox \"6643574c0b6e5e68430c361b5e22d48918495468867be61b2240c7f74f1e539f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a2ee0cf685dd151d5e82e71851ce3dbf576cb302c7924b84a293b19fdd87b31\"" Jun 25 14:36:42.527463 containerd[1245]: time="2024-06-25T14:36:42.527438094Z" level=info msg="StartContainer for \"9a2ee0cf685dd151d5e82e71851ce3dbf576cb302c7924b84a293b19fdd87b31\"" Jun 25 14:36:42.529412 containerd[1245]: time="2024-06-25T14:36:42.529377614Z" level=info msg="CreateContainer within sandbox \"cd274c907a69f55e7621c58a54796e58fc322999305e9cc13cab66d08df88a29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7f27800bb0436ce7f5675be4e54d584a74ef48336f271c62d6cafd84e85b065b\"" Jun 25 14:36:42.529860 containerd[1245]: time="2024-06-25T14:36:42.529773494Z" level=info msg="StartContainer for \"7f27800bb0436ce7f5675be4e54d584a74ef48336f271c62d6cafd84e85b065b\"" Jun 25 14:36:42.547190 systemd[1]: Started cri-containerd-2fac678a9db954fb91f0badf4d15f2ab003aec744eab9e4c2958cef72e62d13d.scope - libcontainer container 2fac678a9db954fb91f0badf4d15f2ab003aec744eab9e4c2958cef72e62d13d. 
Jun 25 14:36:42.561464 kubelet[1908]: I0625 14:36:42.561428 1908 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 14:36:42.561802 kubelet[1908]: E0625 14:36:42.561741 1908 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Jun 25 14:36:42.562000 audit: BPF prog-id=70 op=LOAD Jun 25 14:36:42.563000 audit: BPF prog-id=71 op=LOAD Jun 25 14:36:42.563000 audit[2085]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=1971 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616336373861396462393534666239316630626164663464313566 Jun 25 14:36:42.563000 audit: BPF prog-id=72 op=LOAD Jun 25 14:36:42.563000 audit[2085]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=1971 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616336373861396462393534666239316630626164663464313566 Jun 25 14:36:42.564000 audit: BPF prog-id=72 op=UNLOAD Jun 25 14:36:42.564000 audit: BPF prog-id=71 op=UNLOAD Jun 25 14:36:42.564000 audit: BPF prog-id=73 op=LOAD Jun 25 14:36:42.564000 audit[2085]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=1971 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266616336373861396462393534666239316630626164663464313566 Jun 25 14:36:42.574160 systemd[1]: Started cri-containerd-7f27800bb0436ce7f5675be4e54d584a74ef48336f271c62d6cafd84e85b065b.scope - libcontainer container 7f27800bb0436ce7f5675be4e54d584a74ef48336f271c62d6cafd84e85b065b. Jun 25 14:36:42.575188 systemd[1]: Started cri-containerd-9a2ee0cf685dd151d5e82e71851ce3dbf576cb302c7924b84a293b19fdd87b31.scope - libcontainer container 9a2ee0cf685dd151d5e82e71851ce3dbf576cb302c7924b84a293b19fdd87b31. 
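
The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL bytes between arguments: the long 72756E63... values are the runc invocations for the shim tasks, and the earlier 697074.../697036... values are the iptables/ip6tables chain setup commands. The decoder below is a small illustration; the sample value is copied verbatim from one of the iptables records earlier in this log.

```python
#!/usr/bin/env python3
"""Decode audit PROCTITLE hex strings like the ones in the log above.

The audit subsystem hex-encodes the process title with NUL bytes between
arguments; decoding shows the underlying runc and iptables command lines.
"""
import binascii


def decode_proctitle(hex_title: str) -> str:
    raw = binascii.unhexlify(hex_title)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)


if __name__ == "__main__":
    # Sample copied from the audit[1920] record earlier in the log.
    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
```
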
Jun 25 14:36:42.586000 audit: BPF prog-id=74 op=LOAD Jun 25 14:36:42.587000 audit: BPF prog-id=75 op=LOAD Jun 25 14:36:42.587000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=1973 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961326565306366363835646431353164356538326537313835316365 Jun 25 14:36:42.587000 audit: BPF prog-id=76 op=LOAD Jun 25 14:36:42.587000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=1973 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961326565306366363835646431353164356538326537313835316365 Jun 25 14:36:42.588957 containerd[1245]: time="2024-06-25T14:36:42.588915774Z" level=info msg="StartContainer for \"2fac678a9db954fb91f0badf4d15f2ab003aec744eab9e4c2958cef72e62d13d\" returns successfully" Jun 25 14:36:42.588000 audit: BPF prog-id=76 op=UNLOAD Jun 25 14:36:42.588000 audit: BPF prog-id=75 op=UNLOAD Jun 25 14:36:42.588000 audit: BPF prog-id=77 op=LOAD Jun 25 14:36:42.588000 audit[2120]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=1973 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961326565306366363835646431353164356538326537313835316365 Jun 25 14:36:42.597000 audit: BPF prog-id=78 op=LOAD Jun 25 14:36:42.597000 audit: BPF prog-id=79 op=LOAD Jun 25 14:36:42.597000 audit[2107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=1972 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766323738303062623034333663653766353637356265346535346435 Jun 25 14:36:42.599000 audit: BPF prog-id=80 op=LOAD Jun 25 14:36:42.599000 audit[2107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=1972 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.599000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766323738303062623034333663653766353637356265346535346435 Jun 25 14:36:42.599000 audit: BPF prog-id=80 op=UNLOAD Jun 25 14:36:42.599000 audit: BPF prog-id=79 op=UNLOAD Jun 25 14:36:42.599000 audit: BPF prog-id=81 op=LOAD Jun 25 14:36:42.599000 audit[2107]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=1972 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:36:42.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766323738303062623034333663653766353637356265346535346435 Jun 25 14:36:42.635208 containerd[1245]: time="2024-06-25T14:36:42.632188734Z" level=info msg="StartContainer for \"7f27800bb0436ce7f5675be4e54d584a74ef48336f271c62d6cafd84e85b065b\" returns successfully" Jun 25 14:36:42.635208 containerd[1245]: time="2024-06-25T14:36:42.632383654Z" level=info msg="StartContainer for \"9a2ee0cf685dd151d5e82e71851ce3dbf576cb302c7924b84a293b19fdd87b31\" returns successfully" Jun 25 14:36:43.071855 kubelet[1908]: E0625 14:36:43.071822 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:43.072502 kubelet[1908]: E0625 14:36:43.072478 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:43.073692 kubelet[1908]: E0625 14:36:43.073661 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:44.075437 kubelet[1908]: E0625 14:36:44.075397 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:44.075954 kubelet[1908]: E0625 14:36:44.075922 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:44.163403 kubelet[1908]: I0625 14:36:44.163368 1908 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 14:36:44.220000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.220000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=4006a36160 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:36:44.220000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:36:44.220000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.220000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=4004d8b860 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:36:44.220000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:36:44.220000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.220000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=4004d8b920 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:36:44.220000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:36:44.221000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.221000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=43 a1=4006a361c0 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:36:44.221000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:36:44.221000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.221000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=43 a1=40054de660 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:36:44.221000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:36:44.222000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.222000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=4b a1=4002aafc50 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:36:44.222000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:36:44.274538 kubelet[1908]: E0625 14:36:44.274432 1908 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 14:36:44.315000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.315000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=7 a1=40008dc000 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:36:44.315000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:36:44.316000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:36:44.316000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=7 a1=40001141c0 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:36:44.316000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:36:44.379003 kubelet[1908]: I0625 14:36:44.378365 1908 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 14:36:45.036929 kubelet[1908]: I0625 14:36:45.036896 1908 apiserver.go:52] "Watching apiserver" Jun 25 14:36:45.050183 kubelet[1908]: I0625 14:36:45.050142 1908 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 14:36:45.859765 kubelet[1908]: E0625 14:36:45.859731 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:46.076792 kubelet[1908]: E0625 14:36:46.076749 1908 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:46.098261 systemd[1]: Reloading. Jun 25 14:36:46.228738 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:36:46.283000 audit: BPF prog-id=82 op=LOAD Jun 25 14:36:46.283000 audit: BPF prog-id=44 op=UNLOAD Jun 25 14:36:46.284000 audit: BPF prog-id=83 op=LOAD Jun 25 14:36:46.284000 audit: BPF prog-id=45 op=UNLOAD Jun 25 14:36:46.284000 audit: BPF prog-id=84 op=LOAD Jun 25 14:36:46.284000 audit: BPF prog-id=85 op=LOAD Jun 25 14:36:46.284000 audit: BPF prog-id=46 op=UNLOAD Jun 25 14:36:46.284000 audit: BPF prog-id=47 op=UNLOAD Jun 25 14:36:46.285000 audit: BPF prog-id=86 op=LOAD Jun 25 14:36:46.285000 audit: BPF prog-id=48 op=UNLOAD Jun 25 14:36:46.285000 audit: BPF prog-id=87 op=LOAD Jun 25 14:36:46.285000 audit: BPF prog-id=88 op=LOAD Jun 25 14:36:46.286000 audit: BPF prog-id=49 op=UNLOAD Jun 25 14:36:46.286000 audit: BPF prog-id=50 op=UNLOAD Jun 25 14:36:46.287000 audit: BPF prog-id=89 op=LOAD Jun 25 14:36:46.287000 audit: BPF prog-id=70 op=UNLOAD Jun 25 14:36:46.287000 audit: BPF prog-id=90 op=LOAD Jun 25 14:36:46.287000 audit: BPF prog-id=78 op=UNLOAD Jun 25 14:36:46.288000 audit: BPF prog-id=91 op=LOAD Jun 25 14:36:46.288000 audit: BPF prog-id=51 op=UNLOAD Jun 25 14:36:46.288000 audit: BPF prog-id=92 op=LOAD Jun 25 14:36:46.288000 audit: BPF prog-id=74 op=UNLOAD Jun 25 14:36:46.289000 audit: BPF prog-id=93 op=LOAD Jun 25 14:36:46.289000 audit: BPF prog-id=62 op=UNLOAD Jun 25 14:36:46.290000 audit: BPF prog-id=94 op=LOAD Jun 25 14:36:46.290000 audit: BPF prog-id=52 op=UNLOAD Jun 25 14:36:46.292000 audit: BPF prog-id=95 op=LOAD Jun 25 14:36:46.292000 audit: BPF prog-id=66 op=UNLOAD Jun 25 14:36:46.292000 audit: BPF prog-id=96 op=LOAD Jun 25 14:36:46.292000 audit: BPF prog-id=58 op=UNLOAD Jun 25 14:36:46.293000 audit: BPF prog-id=97 op=LOAD Jun 25 14:36:46.293000 audit: BPF prog-id=98 op=LOAD Jun 25 14:36:46.293000 audit: BPF prog-id=53 op=UNLOAD Jun 25 14:36:46.293000 audit: BPF prog-id=54 op=UNLOAD Jun 25 14:36:46.293000 audit: BPF prog-id=99 op=LOAD Jun 25 14:36:46.294000 audit: BPF prog-id=55 op=UNLOAD Jun 25 14:36:46.294000 audit: BPF prog-id=100 op=LOAD Jun 25 14:36:46.294000 audit: BPF prog-id=101 op=LOAD Jun 25 14:36:46.294000 audit: BPF prog-id=56 op=UNLOAD Jun 25 14:36:46.294000 
audit: BPF prog-id=57 op=UNLOAD Jun 25 14:36:46.304633 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:46.326333 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:36:46.326548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:46.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:46.326614 systemd[1]: kubelet.service: Consumed 1.296s CPU time. Jun 25 14:36:46.339510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:36:46.430173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:36:46.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:46.471108 kubelet[2249]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:36:46.471108 kubelet[2249]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:36:46.471108 kubelet[2249]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:36:46.471452 kubelet[2249]: I0625 14:36:46.471141 2249 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:36:46.475146 kubelet[2249]: I0625 14:36:46.475097 2249 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 14:36:46.475146 kubelet[2249]: I0625 14:36:46.475132 2249 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:36:46.475368 kubelet[2249]: I0625 14:36:46.475352 2249 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 14:36:46.476716 kubelet[2249]: I0625 14:36:46.476689 2249 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:36:46.478041 kubelet[2249]: I0625 14:36:46.478021 2249 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:36:46.486404 kubelet[2249]: I0625 14:36:46.486291 2249 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:36:46.486509 kubelet[2249]: I0625 14:36:46.486479 2249 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:36:46.486679 kubelet[2249]: I0625 14:36:46.486507 2249 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:36:46.486765 kubelet[2249]: I0625 14:36:46.486685 2249 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:36:46.486765 kubelet[2249]: I0625 14:36:46.486693 2249 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:36:46.486765 kubelet[2249]: I0625 14:36:46.486728 2249 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:36:46.486831 kubelet[2249]: I0625 14:36:46.486822 2249 kubelet.go:400] "Attempting to sync node with API server" Jun 25 14:36:46.486852 kubelet[2249]: I0625 14:36:46.486837 2249 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:36:46.486875 kubelet[2249]: I0625 14:36:46.486865 2249 kubelet.go:312] "Adding apiserver pod source" Jun 25 14:36:46.486898 kubelet[2249]: I0625 14:36:46.486881 2249 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:36:46.492462 kubelet[2249]: I0625 14:36:46.492436 2249 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:36:46.494746 kubelet[2249]: I0625 14:36:46.494719 2249 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 14:36:46.499797 kubelet[2249]: I0625 14:36:46.499751 2249 server.go:1264] "Started kubelet" Jun 25 14:36:46.503373 kubelet[2249]: I0625 14:36:46.503343 2249 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:36:46.509336 kubelet[2249]: E0625 14:36:46.509306 2249 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:36:46.510846 kubelet[2249]: I0625 14:36:46.510299 2249 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:36:46.511395 kubelet[2249]: I0625 14:36:46.511361 2249 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:36:46.511636 kubelet[2249]: I0625 14:36:46.511621 2249 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 14:36:46.511740 kubelet[2249]: I0625 14:36:46.511451 2249 server.go:455] "Adding debug handlers to kubelet server" Jun 25 14:36:46.513330 kubelet[2249]: I0625 14:36:46.513302 2249 reconciler.go:26] "Reconciler: start to sync state" Jun 25 14:36:46.513595 kubelet[2249]: I0625 14:36:46.513507 2249 factory.go:221] Registration of the systemd container factory successfully Jun 25 14:36:46.513696 kubelet[2249]: I0625 14:36:46.513670 2249 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 14:36:46.513748 kubelet[2249]: I0625 14:36:46.513693 2249 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 14:36:46.513982 kubelet[2249]: I0625 14:36:46.513943 2249 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:36:46.514932 kubelet[2249]: I0625 14:36:46.514889 2249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:36:46.516809 kubelet[2249]: I0625 14:36:46.516762 2249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 14:36:46.516874 kubelet[2249]: I0625 14:36:46.516816 2249 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:36:46.516874 kubelet[2249]: I0625 14:36:46.516827 2249 factory.go:221] Registration of the containerd container factory successfully Jun 25 14:36:46.526853 kubelet[2249]: I0625 14:36:46.516835 2249 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 14:36:46.527100 kubelet[2249]: E0625 14:36:46.527081 2249 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 14:36:46.553817 kubelet[2249]: I0625 14:36:46.553774 2249 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:36:46.553817 kubelet[2249]: I0625 14:36:46.553797 2249 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:36:46.553963 kubelet[2249]: I0625 14:36:46.553820 2249 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:36:46.554074 kubelet[2249]: I0625 14:36:46.554047 2249 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:36:46.554132 kubelet[2249]: I0625 14:36:46.554065 2249 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:36:46.554132 kubelet[2249]: I0625 14:36:46.554093 2249 policy_none.go:49] "None policy: Start" Jun 25 14:36:46.554651 kubelet[2249]: I0625 14:36:46.554619 2249 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 14:36:46.554651 kubelet[2249]: I0625 14:36:46.554645 2249 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:36:46.554801 kubelet[2249]: I0625 14:36:46.554779 2249 state_mem.go:75] "Updated machine memory state" Jun 25 14:36:46.559696 kubelet[2249]: I0625 14:36:46.559666 2249 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Jun 25 14:36:46.559875 kubelet[2249]: I0625 14:36:46.559836 2249 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 14:36:46.560113 kubelet[2249]: I0625 14:36:46.560096 2249 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:36:46.629268 kubelet[2249]: I0625 14:36:46.627678 2249 topology_manager.go:215] "Topology Admit Handler" podUID="b4db767ddf5e974c662d0ea97654c07c" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 14:36:46.629268 kubelet[2249]: I0625 14:36:46.627800 2249 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 14:36:46.629268 kubelet[2249]: I0625 14:36:46.627870 2249 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 14:36:46.666012 kubelet[2249]: I0625 14:36:46.665959 2249 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 14:36:46.715655 kubelet[2249]: I0625 14:36:46.715610 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4db767ddf5e974c662d0ea97654c07c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4db767ddf5e974c662d0ea97654c07c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:46.715655 kubelet[2249]: I0625 14:36:46.715656 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:46.715800 kubelet[2249]: I0625 14:36:46.715676 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:46.715800 kubelet[2249]: I0625 14:36:46.715694 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:46.715800 kubelet[2249]: I0625 14:36:46.715711 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jun 25 14:36:46.715800 kubelet[2249]: I0625 14:36:46.715726 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4db767ddf5e974c662d0ea97654c07c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4db767ddf5e974c662d0ea97654c07c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:46.715800 
kubelet[2249]: I0625 14:36:46.715742 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4db767ddf5e974c662d0ea97654c07c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4db767ddf5e974c662d0ea97654c07c\") " pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:46.715942 kubelet[2249]: I0625 14:36:46.715755 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:46.715942 kubelet[2249]: I0625 14:36:46.715778 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 14:36:46.933238 kubelet[2249]: E0625 14:36:46.933199 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:46.934611 kubelet[2249]: E0625 14:36:46.933556 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:46.941253 kubelet[2249]: E0625 14:36:46.941066 2249 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:46.941638 kubelet[2249]: E0625 14:36:46.941619 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:46.987443 kubelet[2249]: I0625 14:36:46.987392 2249 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jun 25 14:36:46.988112 kubelet[2249]: I0625 14:36:46.988093 2249 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 14:36:47.487424 kubelet[2249]: I0625 14:36:47.487389 2249 apiserver.go:52] "Watching apiserver" Jun 25 14:36:47.512611 kubelet[2249]: I0625 14:36:47.512576 2249 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 14:36:47.534464 kubelet[2249]: E0625 14:36:47.534438 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:47.535258 kubelet[2249]: E0625 14:36:47.535235 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:47.543800 kubelet[2249]: E0625 14:36:47.543770 2249 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 14:36:47.544851 kubelet[2249]: E0625 14:36:47.544831 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jun 25 14:36:47.561022 kubelet[2249]: I0625 14:36:47.560948 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.560918414 podStartE2EDuration="1.560918414s" podCreationTimestamp="2024-06-25 14:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:36:47.56034717 +0000 UTC m=+1.125534397" watchObservedRunningTime="2024-06-25 14:36:47.560918414 +0000 UTC m=+1.126105641" Jun 25 14:36:47.580682 kubelet[2249]: I0625 14:36:47.580619 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5806005920000001 podStartE2EDuration="1.580600592s" podCreationTimestamp="2024-06-25 14:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:36:47.572035172 +0000 UTC m=+1.137222399" watchObservedRunningTime="2024-06-25 14:36:47.580600592 +0000 UTC m=+1.145787819" Jun 25 14:36:47.591703 kubelet[2249]: I0625 14:36:47.591647 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.591633029 podStartE2EDuration="2.591633029s" podCreationTimestamp="2024-06-25 14:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:36:47.580809353 +0000 UTC m=+1.145996580" watchObservedRunningTime="2024-06-25 14:36:47.591633029 +0000 UTC m=+1.156820216" Jun 25 14:36:48.535504 kubelet[2249]: E0625 14:36:48.535471 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:49.536465 kubelet[2249]: E0625 14:36:49.536433 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:51.251553 kubelet[2249]: E0625 14:36:51.247582 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:51.259327 sudo[1390]: pam_unix(sudo:session): session closed for user root Jun 25 14:36:51.258000 audit[1390]: USER_END pid=1390 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:51.260194 kernel: kauditd_printk_skb: 128 callbacks suppressed Jun 25 14:36:51.260270 kernel: audit: type=1106 audit(1719326211.258:391): pid=1390 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:51.258000 audit[1390]: CRED_DISP pid=1390 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:51.263262 sshd[1387]: pam_unix(sshd:session): session closed for user core Jun 25 14:36:51.264561 kernel: audit: type=1104 audit(1719326211.258:392): pid=1390 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:36:51.263000 audit[1387]: USER_END pid=1387 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:51.266151 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:33110.service: Deactivated successfully. Jun 25 14:36:51.267184 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:36:51.267373 systemd[1]: session-7.scope: Consumed 7.022s CPU time. Jun 25 14:36:51.267563 kernel: audit: type=1106 audit(1719326211.263:393): pid=1387 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:51.267601 kernel: audit: type=1104 audit(1719326211.263:394): pid=1387 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:51.263000 audit[1387]: CRED_DISP pid=1387 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:36:51.268343 systemd-logind[1235]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:36:51.269342 systemd-logind[1235]: Removed session 7. Jun 25 14:36:51.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.122:22-10.0.0.1:33110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:36:51.271525 kernel: audit: type=1131 audit(1719326211.265:395): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.122:22-10.0.0.1:33110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:36:52.826815 kubelet[2249]: E0625 14:36:52.826770 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:53.541940 kubelet[2249]: E0625 14:36:53.541911 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:58.186000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520978 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 14:36:58.186000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=4000c3bcc0 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:36:58.193030 kernel: audit: type=1400 audit(1719326218.186:396): avc: denied { watch } for pid=2116 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520978 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 14:36:58.193122 kernel: audit: type=1300 audit(1719326218.186:396): arch=c00000b7 syscall=27 success=no exit=-13 a0=9 a1=4000c3bcc0 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:36:58.193156 kernel: audit: type=1327 audit(1719326218.186:396): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:36:58.186000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:36:58.901851 kubelet[2249]: E0625 14:36:58.901813 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:36:59.581083 update_engine[1238]: I0625 14:36:59.581025 1238 update_attempter.cc:509] Updating boot flags... 
Jun 25 14:36:59.599014 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2343) Jun 25 14:36:59.636009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2341) Jun 25 14:37:00.636000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:00.636000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:00.642747 kernel: audit: type=1400 audit(1719326220.636:397): avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:00.642798 kernel: audit: type=1400 audit(1719326220.636:398): avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:00.642833 kernel: audit: type=1300 audit(1719326220.636:398): arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4001bd9640 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:00.636000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4001bd9640 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:00.636000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:00.648852 kernel: audit: type=1327 audit(1719326220.636:398): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:00.648904 kernel: audit: type=1300 audit(1719326220.636:397): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001cc62c0 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:00.636000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001cc62c0 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" 
subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:00.636000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:00.655561 kernel: audit: type=1327 audit(1719326220.636:397): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:00.655627 kernel: audit: type=1400 audit(1719326220.636:399): avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:00.636000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:00.636000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4001bd9680 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:00.636000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:00.637000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:00.637000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001c15b60 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:00.637000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:00.919531 kubelet[2249]: I0625 14:37:00.919423 2249 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:37:00.919830 containerd[1245]: time="2024-06-25T14:37:00.919751985Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 14:37:00.920022 kubelet[2249]: I0625 14:37:00.919911 2249 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:37:01.254755 kubelet[2249]: I0625 14:37:01.254632 2249 topology_manager.go:215] "Topology Admit Handler" podUID="9139c5a3-59d0-4717-86ab-d473b1bcc191" podNamespace="kube-system" podName="kube-proxy-2v89f" Jun 25 14:37:01.261835 kubelet[2249]: E0625 14:37:01.261802 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:01.264567 systemd[1]: Created slice kubepods-besteffort-pod9139c5a3_59d0_4717_86ab_d473b1bcc191.slice - libcontainer container kubepods-besteffort-pod9139c5a3_59d0_4717_86ab_d473b1bcc191.slice. Jun 25 14:37:01.319212 kubelet[2249]: I0625 14:37:01.319175 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9139c5a3-59d0-4717-86ab-d473b1bcc191-kube-proxy\") pod \"kube-proxy-2v89f\" (UID: \"9139c5a3-59d0-4717-86ab-d473b1bcc191\") " pod="kube-system/kube-proxy-2v89f" Jun 25 14:37:01.319458 kubelet[2249]: I0625 14:37:01.319439 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9139c5a3-59d0-4717-86ab-d473b1bcc191-xtables-lock\") pod \"kube-proxy-2v89f\" (UID: \"9139c5a3-59d0-4717-86ab-d473b1bcc191\") " pod="kube-system/kube-proxy-2v89f" Jun 25 14:37:01.319546 kubelet[2249]: I0625 14:37:01.319530 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9139c5a3-59d0-4717-86ab-d473b1bcc191-lib-modules\") pod \"kube-proxy-2v89f\" (UID: \"9139c5a3-59d0-4717-86ab-d473b1bcc191\") " pod="kube-system/kube-proxy-2v89f" Jun 25 14:37:01.319630 kubelet[2249]: I0625 14:37:01.319616 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9bcc\" (UniqueName: \"kubernetes.io/projected/9139c5a3-59d0-4717-86ab-d473b1bcc191-kube-api-access-g9bcc\") pod \"kube-proxy-2v89f\" (UID: \"9139c5a3-59d0-4717-86ab-d473b1bcc191\") " pod="kube-system/kube-proxy-2v89f" Jun 25 14:37:01.428106 kubelet[2249]: E0625 14:37:01.428066 2249 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 25 14:37:01.428106 kubelet[2249]: E0625 14:37:01.428099 2249 projected.go:200] Error preparing data for projected volume kube-api-access-g9bcc for pod kube-system/kube-proxy-2v89f: configmap "kube-root-ca.crt" not found Jun 25 14:37:01.428298 kubelet[2249]: E0625 14:37:01.428159 2249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9139c5a3-59d0-4717-86ab-d473b1bcc191-kube-api-access-g9bcc podName:9139c5a3-59d0-4717-86ab-d473b1bcc191 nodeName:}" failed. No retries permitted until 2024-06-25 14:37:01.928135083 +0000 UTC m=+15.493322310 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g9bcc" (UniqueName: "kubernetes.io/projected/9139c5a3-59d0-4717-86ab-d473b1bcc191-kube-api-access-g9bcc") pod "kube-proxy-2v89f" (UID: "9139c5a3-59d0-4717-86ab-d473b1bcc191") : configmap "kube-root-ca.crt" not found Jun 25 14:37:02.016963 kubelet[2249]: I0625 14:37:02.016815 2249 topology_manager.go:215] "Topology Admit Handler" podUID="8b912997-36f0-47dc-9cd5-fa2292f4524e" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-dm5rq" Jun 25 14:37:02.033482 systemd[1]: Created slice kubepods-besteffort-pod8b912997_36f0_47dc_9cd5_fa2292f4524e.slice - libcontainer container kubepods-besteffort-pod8b912997_36f0_47dc_9cd5_fa2292f4524e.slice. Jun 25 14:37:02.124574 kubelet[2249]: I0625 14:37:02.124539 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b912997-36f0-47dc-9cd5-fa2292f4524e-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-dm5rq\" (UID: \"8b912997-36f0-47dc-9cd5-fa2292f4524e\") " pod="tigera-operator/tigera-operator-76ff79f7fd-dm5rq" Jun 25 14:37:02.124574 kubelet[2249]: I0625 14:37:02.124578 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfg2p\" (UniqueName: \"kubernetes.io/projected/8b912997-36f0-47dc-9cd5-fa2292f4524e-kube-api-access-qfg2p\") pod \"tigera-operator-76ff79f7fd-dm5rq\" (UID: \"8b912997-36f0-47dc-9cd5-fa2292f4524e\") " pod="tigera-operator/tigera-operator-76ff79f7fd-dm5rq" Jun 25 14:37:02.178391 kubelet[2249]: E0625 14:37:02.178354 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:02.179108 containerd[1245]: time="2024-06-25T14:37:02.179071342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v89f,Uid:9139c5a3-59d0-4717-86ab-d473b1bcc191,Namespace:kube-system,Attempt:0,}" Jun 25 14:37:02.201749 containerd[1245]: time="2024-06-25T14:37:02.201677682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:02.201749 containerd[1245]: time="2024-06-25T14:37:02.201722562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:02.201749 containerd[1245]: time="2024-06-25T14:37:02.201738563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:02.201749 containerd[1245]: time="2024-06-25T14:37:02.201750843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:02.221161 systemd[1]: Started cri-containerd-f37a2a8a7a728c9a730af4f1743460432b2bd224bef7c200ec7ca6e62c38c4b5.scope - libcontainer container f37a2a8a7a728c9a730af4f1743460432b2bd224bef7c200ec7ca6e62c38c4b5. 
Jun 25 14:37:02.236000 audit: BPF prog-id=102 op=LOAD Jun 25 14:37:02.240000 audit: BPF prog-id=103 op=LOAD Jun 25 14:37:02.240000 audit[2370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=2359 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376132613861376137323863396137333061663466313734333436 Jun 25 14:37:02.240000 audit: BPF prog-id=104 op=LOAD Jun 25 14:37:02.240000 audit[2370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=2359 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376132613861376137323863396137333061663466313734333436 Jun 25 14:37:02.240000 audit: BPF prog-id=104 op=UNLOAD Jun 25 14:37:02.240000 audit: BPF prog-id=103 op=UNLOAD Jun 25 14:37:02.240000 audit: BPF prog-id=105 op=LOAD Jun 25 14:37:02.240000 audit[2370]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=2359 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376132613861376137323863396137333061663466313734333436 Jun 25 14:37:02.255211 containerd[1245]: time="2024-06-25T14:37:02.255165745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2v89f,Uid:9139c5a3-59d0-4717-86ab-d473b1bcc191,Namespace:kube-system,Attempt:0,} returns sandbox id \"f37a2a8a7a728c9a730af4f1743460432b2bd224bef7c200ec7ca6e62c38c4b5\"" Jun 25 14:37:02.255865 kubelet[2249]: E0625 14:37:02.255843 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:02.260356 containerd[1245]: time="2024-06-25T14:37:02.260284638Z" level=info msg="CreateContainer within sandbox \"f37a2a8a7a728c9a730af4f1743460432b2bd224bef7c200ec7ca6e62c38c4b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:37:02.274175 containerd[1245]: time="2024-06-25T14:37:02.273482313Z" level=info msg="CreateContainer within sandbox \"f37a2a8a7a728c9a730af4f1743460432b2bd224bef7c200ec7ca6e62c38c4b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e0c8fdec5a5121707483131425256c677f0296ddadd178b2e1f6fecaa2033185\"" Jun 25 14:37:02.275168 containerd[1245]: time="2024-06-25T14:37:02.274683397Z" level=info msg="StartContainer for \"e0c8fdec5a5121707483131425256c677f0296ddadd178b2e1f6fecaa2033185\"" 
Jun 25 14:37:02.303149 systemd[1]: Started cri-containerd-e0c8fdec5a5121707483131425256c677f0296ddadd178b2e1f6fecaa2033185.scope - libcontainer container e0c8fdec5a5121707483131425256c677f0296ddadd178b2e1f6fecaa2033185. Jun 25 14:37:02.313000 audit: BPF prog-id=106 op=LOAD Jun 25 14:37:02.313000 audit[2401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2359 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633866646563356135313231373037343833313331343235323536 Jun 25 14:37:02.313000 audit: BPF prog-id=107 op=LOAD Jun 25 14:37:02.313000 audit[2401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2359 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633866646563356135313231373037343833313331343235323536 Jun 25 14:37:02.313000 audit: BPF prog-id=107 op=UNLOAD Jun 25 14:37:02.313000 audit: BPF prog-id=106 op=UNLOAD Jun 25 14:37:02.313000 audit: BPF prog-id=108 op=LOAD Jun 25 14:37:02.313000 audit[2401]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2359 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530633866646563356135313231373037343833313331343235323536 Jun 25 14:37:02.325103 containerd[1245]: time="2024-06-25T14:37:02.325060131Z" level=info msg="StartContainer for \"e0c8fdec5a5121707483131425256c677f0296ddadd178b2e1f6fecaa2033185\" returns successfully" Jun 25 14:37:02.337537 containerd[1245]: time="2024-06-25T14:37:02.337491524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-dm5rq,Uid:8b912997-36f0-47dc-9cd5-fa2292f4524e,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:37:02.357161 containerd[1245]: time="2024-06-25T14:37:02.357023536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:02.357376 containerd[1245]: time="2024-06-25T14:37:02.357329336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:02.357495 containerd[1245]: time="2024-06-25T14:37:02.357454217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:02.357495 containerd[1245]: time="2024-06-25T14:37:02.357480457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:02.386460 systemd[1]: Started cri-containerd-7fdf101b1fe6a527d3c6edc2c4d8d5a7d4861742062c3462afc371211aa89a26.scope - libcontainer container 7fdf101b1fe6a527d3c6edc2c4d8d5a7d4861742062c3462afc371211aa89a26. Jun 25 14:37:02.396000 audit: BPF prog-id=109 op=LOAD Jun 25 14:37:02.399000 audit: BPF prog-id=110 op=LOAD Jun 25 14:37:02.399000 audit[2445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001b18b0 a2=78 a3=0 items=0 ppid=2435 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766646631303162316665366135323764336336656463326334643864 Jun 25 14:37:02.399000 audit: BPF prog-id=111 op=LOAD Jun 25 14:37:02.399000 audit[2445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001b1640 a2=78 a3=0 items=0 ppid=2435 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766646631303162316665366135323764336336656463326334643864 Jun 25 14:37:02.399000 audit: BPF prog-id=111 op=UNLOAD Jun 25 14:37:02.399000 audit: BPF prog-id=110 op=UNLOAD Jun 25 14:37:02.399000 audit: BPF prog-id=112 op=LOAD Jun 25 14:37:02.399000 audit[2445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001b1b10 a2=78 a3=0 items=0 ppid=2435 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766646631303162316665366135323764336336656463326334643864 Jun 25 14:37:02.427189 containerd[1245]: time="2024-06-25T14:37:02.427138522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-dm5rq,Uid:8b912997-36f0-47dc-9cd5-fa2292f4524e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7fdf101b1fe6a527d3c6edc2c4d8d5a7d4861742062c3462afc371211aa89a26\"" Jun 25 14:37:02.436056 containerd[1245]: time="2024-06-25T14:37:02.436006586Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:37:02.504000 audit[2494]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.504000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1c7ade0 a2=0 a3=1 items=0 ppid=2410 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:37:02.505000 audit[2495]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.505000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf9561f0 a2=0 a3=1 items=0 ppid=2410 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.505000 audit[2496]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.505000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcee0a1c0 a2=0 a3=1 items=0 ppid=2410 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:37:02.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:37:02.506000 audit[2497]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.506000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde115e10 a2=0 a3=1 items=0 ppid=2410 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:37:02.506000 audit[2498]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.506000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb127420 a2=0 a3=1 items=0 ppid=2410 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:37:02.507000 audit[2499]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.507000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff19815f0 a2=0 a3=1 items=0 ppid=2410 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.507000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:37:02.558161 kubelet[2249]: E0625 14:37:02.558039 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:02.568522 kubelet[2249]: I0625 14:37:02.568458 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2v89f" podStartSLOduration=1.568440898 podStartE2EDuration="1.568440898s" podCreationTimestamp="2024-06-25 14:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:37:02.568316458 +0000 UTC m=+16.133503685" watchObservedRunningTime="2024-06-25 14:37:02.568440898 +0000 UTC m=+16.133628125" Jun 25 14:37:02.609000 audit[2500]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.609000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffebe667c0 a2=0 a3=1 items=0 ppid=2410 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:37:02.614000 audit[2502]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.614000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd788bc20 a2=0 a3=1 items=0 ppid=2410 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.614000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:37:02.618000 audit[2505]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.618000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd620aa40 a2=0 a3=1 items=0 ppid=2410 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:37:02.619000 audit[2506]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.619000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9b3fba0 a2=0 a3=1 items=0 ppid=2410 pid=2506 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:37:02.623000 audit[2508]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.623000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1aaf790 a2=0 a3=1 items=0 ppid=2410 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:37:02.625000 audit[2509]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.625000 audit[2509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfbac8f0 a2=0 a3=1 items=0 ppid=2410 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:37:02.627000 audit[2511]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.627000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff4438520 a2=0 a3=1 items=0 ppid=2410 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:37:02.631000 audit[2514]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.631000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe507c620 a2=0 a3=1 items=0 ppid=2410 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.631000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:37:02.633000 audit[2515]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2515 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.633000 audit[2515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3b405f0 a2=0 a3=1 items=0 ppid=2410 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:37:02.635000 audit[2517]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.635000 audit[2517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdcaeff80 a2=0 a3=1 items=0 ppid=2410 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:37:02.637000 audit[2518]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.637000 audit[2518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeb1527c0 a2=0 a3=1 items=0 ppid=2410 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:37:02.639000 audit[2520]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.639000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe35678e0 a2=0 a3=1 items=0 ppid=2410 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:37:02.643000 audit[2523]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.643000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6836570 a2=0 a3=1 items=0 ppid=2410 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.643000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:37:02.647000 audit[2526]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.647000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd2392320 a2=0 a3=1 items=0 ppid=2410 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:37:02.648000 audit[2527]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.648000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5347f70 a2=0 a3=1 items=0 ppid=2410 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:37:02.650000 audit[2529]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.650000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff63893c0 a2=0 a3=1 items=0 ppid=2410 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:37:02.654000 audit[2532]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.654000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc967d80 a2=0 a3=1 items=0 ppid=2410 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:37:02.655000 audit[2533]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.655000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce5221e0 
a2=0 a3=1 items=0 ppid=2410 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.655000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:37:02.662000 audit[2535]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:37:02.662000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffb34ee50 a2=0 a3=1 items=0 ppid=2410 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:37:02.682000 audit[2541]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:02.682000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffe646ff40 a2=0 a3=1 items=0 ppid=2410 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:02.698000 audit[2541]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:02.698000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe646ff40 a2=0 a3=1 items=0 ppid=2410 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.698000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:02.699000 audit[2546]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.699000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcf540200 a2=0 a3=1 items=0 ppid=2410 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:37:02.702000 audit[2548]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.702000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe9ca84f0 a2=0 a3=1 items=0 ppid=2410 pid=2548 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:37:02.706000 audit[2551]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.706000 audit[2551]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcf2021e0 a2=0 a3=1 items=0 ppid=2410 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.706000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:37:02.707000 audit[2552]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.707000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe885a940 a2=0 a3=1 items=0 ppid=2410 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.707000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:37:02.710000 audit[2554]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.710000 audit[2554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc48e2750 a2=0 a3=1 items=0 ppid=2410 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.710000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:37:02.711000 audit[2555]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.711000 audit[2555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3555040 a2=0 a3=1 items=0 ppid=2410 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.711000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:37:02.714000 audit[2557]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2557 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.714000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd20a8230 a2=0 a3=1 items=0 ppid=2410 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.714000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 14:37:02.718000 audit[2560]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.718000 audit[2560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe6346d00 a2=0 a3=1 items=0 ppid=2410 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:37:02.719000 audit[2561]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.719000 audit[2561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3d3f330 a2=0 a3=1 items=0 ppid=2410 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.719000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:37:02.721000 audit[2563]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.721000 audit[2563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8dcdd00 a2=0 a3=1 items=0 ppid=2410 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:37:02.723000 audit[2564]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.723000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd52b87b0 a2=0 a3=1 items=0 ppid=2410 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.723000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:37:02.725000 audit[2566]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.725000 audit[2566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe0a0dcd0 a2=0 a3=1 items=0 ppid=2410 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:37:02.730000 audit[2569]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.730000 audit[2569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdd5d7f20 a2=0 a3=1 items=0 ppid=2410 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:37:02.734000 audit[2572]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.734000 audit[2572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd542d900 a2=0 a3=1 items=0 ppid=2410 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.734000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:37:02.736000 audit[2573]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.736000 audit[2573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3519cb0 a2=0 a3=1 items=0 ppid=2410 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:37:02.738000 audit[2575]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.738000 audit[2575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffdf4b8180 a2=0 a3=1 items=0 ppid=2410 pid=2575 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:37:02.742000 audit[2578]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.742000 audit[2578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe3937a90 a2=0 a3=1 items=0 ppid=2410 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.742000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:37:02.743000 audit[2579]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.743000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1e7aa00 a2=0 a3=1 items=0 ppid=2410 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:37:02.746000 audit[2581]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.746000 audit[2581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd7625640 a2=0 a3=1 items=0 ppid=2410 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.746000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:37:02.747000 audit[2582]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.747000 audit[2582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7869860 a2=0 a3=1 items=0 ppid=2410 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.747000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:37:02.750000 audit[2584]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.750000 
audit[2584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff429d3e0 a2=0 a3=1 items=0 ppid=2410 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.750000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:37:02.753000 audit[2587]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:37:02.753000 audit[2587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd9b80980 a2=0 a3=1 items=0 ppid=2410 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:37:02.757000 audit[2589]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:37:02.757000 audit[2589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffdb0cf5f0 a2=0 a3=1 items=0 ppid=2410 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.757000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:02.757000 audit[2589]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:37:02.757000 audit[2589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffdb0cf5f0 a2=0 a3=1 items=0 ppid=2410 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:02.757000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:03.359139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530601858.mount: Deactivated successfully. 
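The PROCTITLE values in the audit records above are the invoked command lines, hex-encoded with NUL bytes separating the arguments. As a minimal sketch (plain Python, using one value copied from the kube-proxy iptables records above), they can be decoded like this:

    def decode_proctitle(hex_value: str) -> str:
        # The audit PROCTITLE field is the process argv, hex-encoded,
        # with NUL (0x00) bytes between the individual arguments.
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

    # Value copied from the iptables record for pid 2495 above.
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Decoded this way, the run of records above appears to correspond to kube-proxy creating its KUBE-* chains and jump rules in the mangle, filter and nat tables, for both iptables and ip6tables.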
Jun 25 14:37:03.664887 containerd[1245]: time="2024-06-25T14:37:03.664772344Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:03.666028 containerd[1245]: time="2024-06-25T14:37:03.665981307Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473670" Jun 25 14:37:03.666692 containerd[1245]: time="2024-06-25T14:37:03.666656589Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:03.668984 containerd[1245]: time="2024-06-25T14:37:03.668940874Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:03.670033 containerd[1245]: time="2024-06-25T14:37:03.669988917Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:03.670886 containerd[1245]: time="2024-06-25T14:37:03.670839559Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.234787493s" Jun 25 14:37:03.670886 containerd[1245]: time="2024-06-25T14:37:03.670882519Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:37:03.676434 containerd[1245]: time="2024-06-25T14:37:03.676390213Z" level=info msg="CreateContainer within sandbox \"7fdf101b1fe6a527d3c6edc2c4d8d5a7d4861742062c3462afc371211aa89a26\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:37:03.688049 containerd[1245]: time="2024-06-25T14:37:03.688000002Z" level=info msg="CreateContainer within sandbox \"7fdf101b1fe6a527d3c6edc2c4d8d5a7d4861742062c3462afc371211aa89a26\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7ca78b998b2bcb2b8a48254eb0a53386ed11add561e13cb52b6f64ca45e8b99a\"" Jun 25 14:37:03.688588 containerd[1245]: time="2024-06-25T14:37:03.688541843Z" level=info msg="StartContainer for \"7ca78b998b2bcb2b8a48254eb0a53386ed11add561e13cb52b6f64ca45e8b99a\"" Jun 25 14:37:03.717133 systemd[1]: Started cri-containerd-7ca78b998b2bcb2b8a48254eb0a53386ed11add561e13cb52b6f64ca45e8b99a.scope - libcontainer container 7ca78b998b2bcb2b8a48254eb0a53386ed11add561e13cb52b6f64ca45e8b99a. 
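From the pull records just above, a rough effective transfer rate can be derived from the reported bytes read and the reported pull duration. A small sketch using the two values exactly as printed by containerd:

    # Values as printed above for quay.io/tigera/operator:v1.34.0.
    bytes_read = 19473670        # "active requests=0, bytes read=19473670"
    pull_seconds = 1.234787493   # "in 1.234787493s"

    # Effective rate over the whole pull, in decimal megabytes per second.
    print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")  # about 15.8 MB/s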
Jun 25 14:37:03.724000 audit: BPF prog-id=113 op=LOAD Jun 25 14:37:03.726442 kernel: kauditd_printk_skb: 193 callbacks suppressed Jun 25 14:37:03.726494 kernel: audit: type=1334 audit(1719326223.724:469): prog-id=113 op=LOAD Jun 25 14:37:03.725000 audit: BPF prog-id=114 op=LOAD Jun 25 14:37:03.725000 audit[2606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2435 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:03.730518 kernel: audit: type=1334 audit(1719326223.725:470): prog-id=114 op=LOAD Jun 25 14:37:03.730583 kernel: audit: type=1300 audit(1719326223.725:470): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2435 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:03.730615 kernel: audit: type=1327 audit(1719326223.725:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763613738623939386232626362326238613438323534656230613533 Jun 25 14:37:03.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763613738623939386232626362326238613438323534656230613533 Jun 25 14:37:03.725000 audit: BPF prog-id=115 op=LOAD Jun 25 14:37:03.734043 kernel: audit: type=1334 audit(1719326223.725:471): prog-id=115 op=LOAD Jun 25 14:37:03.734417 kernel: audit: type=1300 audit(1719326223.725:471): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2435 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:03.725000 audit[2606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2435 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:03.736995 kernel: audit: type=1327 audit(1719326223.725:471): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763613738623939386232626362326238613438323534656230613533 Jun 25 14:37:03.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763613738623939386232626362326238613438323534656230613533 Jun 25 14:37:03.726000 audit: BPF prog-id=115 op=UNLOAD Jun 25 14:37:03.740736 kernel: audit: type=1334 audit(1719326223.726:472): prog-id=115 op=UNLOAD Jun 25 14:37:03.726000 audit: BPF prog-id=114 op=UNLOAD Jun 25 14:37:03.741678 kernel: audit: type=1334 audit(1719326223.726:473): prog-id=114 op=UNLOAD Jun 25 14:37:03.741784 kernel: audit: type=1334 
audit(1719326223.726:474): prog-id=116 op=LOAD Jun 25 14:37:03.726000 audit: BPF prog-id=116 op=LOAD Jun 25 14:37:03.726000 audit[2606]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2435 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:03.726000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763613738623939386232626362326238613438323534656230613533 Jun 25 14:37:03.774553 containerd[1245]: time="2024-06-25T14:37:03.774499977Z" level=info msg="StartContainer for \"7ca78b998b2bcb2b8a48254eb0a53386ed11add561e13cb52b6f64ca45e8b99a\" returns successfully" Jun 25 14:37:07.291000 audit[2643]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:07.291000 audit[2643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffdc689d40 a2=0 a3=1 items=0 ppid=2410 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:07.291000 audit[2643]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:07.291000 audit[2643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdc689d40 a2=0 a3=1 items=0 ppid=2410 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:07.302000 audit[2645]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:07.302000 audit[2645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd77948d0 a2=0 a3=1 items=0 ppid=2410 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:07.305000 audit[2645]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:07.305000 audit[2645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd77948d0 a2=0 a3=1 items=0 ppid=2410 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.305000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:07.422943 kubelet[2249]: I0625 14:37:07.422881 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-dm5rq" podStartSLOduration=5.177636779 podStartE2EDuration="6.422860339s" podCreationTimestamp="2024-06-25 14:37:01 +0000 UTC" firstStartedPulling="2024-06-25 14:37:02.428429726 +0000 UTC m=+15.993616953" lastFinishedPulling="2024-06-25 14:37:03.673653286 +0000 UTC m=+17.238840513" observedRunningTime="2024-06-25 14:37:04.582890463 +0000 UTC m=+18.148077690" watchObservedRunningTime="2024-06-25 14:37:07.422860339 +0000 UTC m=+20.988047566" Jun 25 14:37:07.426410 kubelet[2249]: I0625 14:37:07.426365 2249 topology_manager.go:215] "Topology Admit Handler" podUID="2a17c037-4f08-4e81-99fe-d6a865822705" podNamespace="calico-system" podName="calico-typha-569fb4ff45-d9dbc" Jun 25 14:37:07.443671 systemd[1]: Created slice kubepods-besteffort-pod2a17c037_4f08_4e81_99fe_d6a865822705.slice - libcontainer container kubepods-besteffort-pod2a17c037_4f08_4e81_99fe_d6a865822705.slice. Jun 25 14:37:07.456743 kubelet[2249]: I0625 14:37:07.456684 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97zzn\" (UniqueName: \"kubernetes.io/projected/2a17c037-4f08-4e81-99fe-d6a865822705-kube-api-access-97zzn\") pod \"calico-typha-569fb4ff45-d9dbc\" (UID: \"2a17c037-4f08-4e81-99fe-d6a865822705\") " pod="calico-system/calico-typha-569fb4ff45-d9dbc" Jun 25 14:37:07.456743 kubelet[2249]: I0625 14:37:07.456732 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a17c037-4f08-4e81-99fe-d6a865822705-tigera-ca-bundle\") pod \"calico-typha-569fb4ff45-d9dbc\" (UID: \"2a17c037-4f08-4e81-99fe-d6a865822705\") " pod="calico-system/calico-typha-569fb4ff45-d9dbc" Jun 25 14:37:07.456743 kubelet[2249]: I0625 14:37:07.456752 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a17c037-4f08-4e81-99fe-d6a865822705-typha-certs\") pod \"calico-typha-569fb4ff45-d9dbc\" (UID: \"2a17c037-4f08-4e81-99fe-d6a865822705\") " pod="calico-system/calico-typha-569fb4ff45-d9dbc" Jun 25 14:37:07.523437 kubelet[2249]: I0625 14:37:07.523379 2249 topology_manager.go:215] "Topology Admit Handler" podUID="dd1da708-a778-4123-acdf-17a2f26a1400" podNamespace="calico-system" podName="calico-node-nnkxj" Jun 25 14:37:07.529543 systemd[1]: Created slice kubepods-besteffort-poddd1da708_a778_4123_acdf_17a2f26a1400.slice - libcontainer container kubepods-besteffort-poddd1da708_a778_4123_acdf_17a2f26a1400.slice. 
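The kubelet startup-latency record above for tigera-operator-76ff79f7fd-dm5rq reports both podStartSLOduration and podStartE2EDuration; their difference equals the image-pull window it also reports (firstStartedPulling to lastFinishedPulling), which is consistent with the SLO figure excluding time spent pulling images. A small check using the monotonic m=+ offsets and durations as printed:

    # Values copied from the pod_startup_latency_tracker record above.
    first_started_pulling = 15.993616953   # firstStartedPulling ... m=+15.993616953
    last_finished_pulling = 17.238840513   # lastFinishedPulling ... m=+17.238840513
    pod_start_slo = 5.177636779            # podStartSLOduration
    pod_start_e2e = 6.422860339            # podStartE2EDuration ("6.422860339s")

    print(round(last_finished_pulling - first_started_pulling, 9))  # 1.24522356
    print(round(pod_start_e2e - pod_start_slo, 9))                  # 1.24522356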
Jun 25 14:37:07.557390 kubelet[2249]: I0625 14:37:07.557264 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-lib-modules\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557390 kubelet[2249]: I0625 14:37:07.557310 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-var-lib-calico\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557390 kubelet[2249]: I0625 14:37:07.557362 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-cni-net-dir\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557390 kubelet[2249]: I0625 14:37:07.557380 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd1da708-a778-4123-acdf-17a2f26a1400-node-certs\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557390 kubelet[2249]: I0625 14:37:07.557396 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-cni-log-dir\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557630 kubelet[2249]: I0625 14:37:07.557414 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-flexvol-driver-host\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557630 kubelet[2249]: I0625 14:37:07.557432 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-var-run-calico\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557630 kubelet[2249]: I0625 14:37:07.557448 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-policysync\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557630 kubelet[2249]: I0625 14:37:07.557464 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5j4\" (UniqueName: \"kubernetes.io/projected/dd1da708-a778-4123-acdf-17a2f26a1400-kube-api-access-jz5j4\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557630 kubelet[2249]: I0625 14:37:07.557482 2249 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-xtables-lock\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557748 kubelet[2249]: I0625 14:37:07.557499 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1da708-a778-4123-acdf-17a2f26a1400-tigera-ca-bundle\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.557748 kubelet[2249]: I0625 14:37:07.557515 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd1da708-a778-4123-acdf-17a2f26a1400-cni-bin-dir\") pod \"calico-node-nnkxj\" (UID: \"dd1da708-a778-4123-acdf-17a2f26a1400\") " pod="calico-system/calico-node-nnkxj" Jun 25 14:37:07.648763 kubelet[2249]: I0625 14:37:07.648719 2249 topology_manager.go:215] "Topology Admit Handler" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" podNamespace="calico-system" podName="csi-node-driver-kfl4t" Jun 25 14:37:07.649259 kubelet[2249]: E0625 14:37:07.649215 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:07.658499 kubelet[2249]: I0625 14:37:07.658437 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d97e3989-35c8-44ea-83c9-925e939d51bb-registration-dir\") pod \"csi-node-driver-kfl4t\" (UID: \"d97e3989-35c8-44ea-83c9-925e939d51bb\") " pod="calico-system/csi-node-driver-kfl4t" Jun 25 14:37:07.658666 kubelet[2249]: I0625 14:37:07.658513 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d97e3989-35c8-44ea-83c9-925e939d51bb-varrun\") pod \"csi-node-driver-kfl4t\" (UID: \"d97e3989-35c8-44ea-83c9-925e939d51bb\") " pod="calico-system/csi-node-driver-kfl4t" Jun 25 14:37:07.658666 kubelet[2249]: I0625 14:37:07.658572 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d97e3989-35c8-44ea-83c9-925e939d51bb-kubelet-dir\") pod \"csi-node-driver-kfl4t\" (UID: \"d97e3989-35c8-44ea-83c9-925e939d51bb\") " pod="calico-system/csi-node-driver-kfl4t" Jun 25 14:37:07.658666 kubelet[2249]: I0625 14:37:07.658618 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glntr\" (UniqueName: \"kubernetes.io/projected/d97e3989-35c8-44ea-83c9-925e939d51bb-kube-api-access-glntr\") pod \"csi-node-driver-kfl4t\" (UID: \"d97e3989-35c8-44ea-83c9-925e939d51bb\") " pod="calico-system/csi-node-driver-kfl4t" Jun 25 14:37:07.658666 kubelet[2249]: I0625 14:37:07.658648 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d97e3989-35c8-44ea-83c9-925e939d51bb-socket-dir\") pod 
\"csi-node-driver-kfl4t\" (UID: \"d97e3989-35c8-44ea-83c9-925e939d51bb\") " pod="calico-system/csi-node-driver-kfl4t" Jun 25 14:37:07.668730 kubelet[2249]: E0625 14:37:07.668639 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.668730 kubelet[2249]: W0625 14:37:07.668662 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.668730 kubelet[2249]: E0625 14:37:07.668683 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.678410 kubelet[2249]: E0625 14:37:07.678372 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.678410 kubelet[2249]: W0625 14:37:07.678396 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.678410 kubelet[2249]: E0625 14:37:07.678418 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.748363 kubelet[2249]: E0625 14:37:07.748314 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:07.749680 containerd[1245]: time="2024-06-25T14:37:07.749019968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-569fb4ff45-d9dbc,Uid:2a17c037-4f08-4e81-99fe-d6a865822705,Namespace:calico-system,Attempt:0,}" Jun 25 14:37:07.760040 kubelet[2249]: E0625 14:37:07.760008 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.760040 kubelet[2249]: W0625 14:37:07.760030 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.760212 kubelet[2249]: E0625 14:37:07.760052 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.760326 kubelet[2249]: E0625 14:37:07.760312 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.760326 kubelet[2249]: W0625 14:37:07.760324 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.760388 kubelet[2249]: E0625 14:37:07.760337 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:07.760551 kubelet[2249]: E0625 14:37:07.760539 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.760577 kubelet[2249]: W0625 14:37:07.760551 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.760577 kubelet[2249]: E0625 14:37:07.760565 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.760761 kubelet[2249]: E0625 14:37:07.760751 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.760787 kubelet[2249]: W0625 14:37:07.760761 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.760787 kubelet[2249]: E0625 14:37:07.760774 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.760950 kubelet[2249]: E0625 14:37:07.760940 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.760984 kubelet[2249]: W0625 14:37:07.760951 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.760984 kubelet[2249]: E0625 14:37:07.760961 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.761162 kubelet[2249]: E0625 14:37:07.761146 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.761192 kubelet[2249]: W0625 14:37:07.761162 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.761192 kubelet[2249]: E0625 14:37:07.761175 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.761337 kubelet[2249]: E0625 14:37:07.761325 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.761364 kubelet[2249]: W0625 14:37:07.761337 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.761407 kubelet[2249]: E0625 14:37:07.761382 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:07.761558 kubelet[2249]: E0625 14:37:07.761544 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.761558 kubelet[2249]: W0625 14:37:07.761556 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.761639 kubelet[2249]: E0625 14:37:07.761589 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.761769 kubelet[2249]: E0625 14:37:07.761753 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.761769 kubelet[2249]: W0625 14:37:07.761768 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.761871 kubelet[2249]: E0625 14:37:07.761851 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.761939 kubelet[2249]: E0625 14:37:07.761918 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.761939 kubelet[2249]: W0625 14:37:07.761935 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.762314 kubelet[2249]: E0625 14:37:07.762287 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.762875 kubelet[2249]: E0625 14:37:07.762408 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.762875 kubelet[2249]: W0625 14:37:07.762419 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.762875 kubelet[2249]: E0625 14:37:07.762493 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.762875 kubelet[2249]: E0625 14:37:07.762560 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.762875 kubelet[2249]: W0625 14:37:07.762567 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.762875 kubelet[2249]: E0625 14:37:07.762637 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:07.762875 kubelet[2249]: E0625 14:37:07.762730 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.762875 kubelet[2249]: W0625 14:37:07.762737 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.762875 kubelet[2249]: E0625 14:37:07.762745 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.763190 kubelet[2249]: E0625 14:37:07.763046 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.763190 kubelet[2249]: W0625 14:37:07.763057 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.763190 kubelet[2249]: E0625 14:37:07.763077 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.763353 kubelet[2249]: E0625 14:37:07.763300 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.763353 kubelet[2249]: W0625 14:37:07.763308 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.763353 kubelet[2249]: E0625 14:37:07.763320 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.763632 kubelet[2249]: E0625 14:37:07.763488 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.763632 kubelet[2249]: W0625 14:37:07.763501 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.763632 kubelet[2249]: E0625 14:37:07.763512 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.763729 kubelet[2249]: E0625 14:37:07.763639 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.763729 kubelet[2249]: W0625 14:37:07.763646 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.763729 kubelet[2249]: E0625 14:37:07.763656 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:07.763793 kubelet[2249]: E0625 14:37:07.763778 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.763793 kubelet[2249]: W0625 14:37:07.763785 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.763862 kubelet[2249]: E0625 14:37:07.763829 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.764035 kubelet[2249]: E0625 14:37:07.763932 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.764035 kubelet[2249]: W0625 14:37:07.763942 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.764035 kubelet[2249]: E0625 14:37:07.763963 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.764152 kubelet[2249]: E0625 14:37:07.764095 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.764152 kubelet[2249]: W0625 14:37:07.764103 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.764152 kubelet[2249]: E0625 14:37:07.764126 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.764266 kubelet[2249]: E0625 14:37:07.764251 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.764266 kubelet[2249]: W0625 14:37:07.764262 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.764329 kubelet[2249]: E0625 14:37:07.764276 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.765071 kubelet[2249]: E0625 14:37:07.764464 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.765071 kubelet[2249]: W0625 14:37:07.764476 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.765071 kubelet[2249]: E0625 14:37:07.764488 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:07.765071 kubelet[2249]: E0625 14:37:07.764659 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.765071 kubelet[2249]: W0625 14:37:07.764667 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.765071 kubelet[2249]: E0625 14:37:07.764682 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.765071 kubelet[2249]: E0625 14:37:07.764918 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.765071 kubelet[2249]: W0625 14:37:07.764932 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.765071 kubelet[2249]: E0625 14:37:07.764946 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.766377 kubelet[2249]: E0625 14:37:07.766352 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.766377 kubelet[2249]: W0625 14:37:07.766374 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.766505 kubelet[2249]: E0625 14:37:07.766394 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.793409 kubelet[2249]: E0625 14:37:07.793381 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:07.793409 kubelet[2249]: W0625 14:37:07.793400 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:07.793596 kubelet[2249]: E0625 14:37:07.793419 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:07.818219 containerd[1245]: time="2024-06-25T14:37:07.817474340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:07.818219 containerd[1245]: time="2024-06-25T14:37:07.818114781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:07.818219 containerd[1245]: time="2024-06-25T14:37:07.818133781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:07.818219 containerd[1245]: time="2024-06-25T14:37:07.818145621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:07.832525 kubelet[2249]: E0625 14:37:07.832478 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:07.833439 containerd[1245]: time="2024-06-25T14:37:07.833389050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nnkxj,Uid:dd1da708-a778-4123-acdf-17a2f26a1400,Namespace:calico-system,Attempt:0,}" Jun 25 14:37:07.852192 systemd[1]: Started cri-containerd-4ae77d0dd5ed93a36555784607d610f89160884fe1e2c769b339dfdb62b863a5.scope - libcontainer container 4ae77d0dd5ed93a36555784607d610f89160884fe1e2c769b339dfdb62b863a5. Jun 25 14:37:07.865000 audit: BPF prog-id=117 op=LOAD Jun 25 14:37:07.865000 audit: BPF prog-id=118 op=LOAD Jun 25 14:37:07.865000 audit[2696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400019d8b0 a2=78 a3=0 items=0 ppid=2687 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461653737643064643565643933613336353535373834363037643631 Jun 25 14:37:07.865000 audit: BPF prog-id=119 op=LOAD Jun 25 14:37:07.865000 audit[2696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400019d640 a2=78 a3=0 items=0 ppid=2687 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461653737643064643565643933613336353535373834363037643631 Jun 25 14:37:07.866000 audit: BPF prog-id=119 op=UNLOAD Jun 25 14:37:07.866000 audit: BPF prog-id=118 op=UNLOAD Jun 25 14:37:07.866000 audit: BPF prog-id=120 op=LOAD Jun 25 14:37:07.866000 audit[2696]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400019db10 a2=78 a3=0 items=0 ppid=2687 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461653737643064643565643933613336353535373834363037643631 Jun 25 14:37:07.894564 containerd[1245]: time="2024-06-25T14:37:07.894517488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-569fb4ff45-d9dbc,Uid:2a17c037-4f08-4e81-99fe-d6a865822705,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ae77d0dd5ed93a36555784607d610f89160884fe1e2c769b339dfdb62b863a5\"" Jun 25 14:37:07.896320 kubelet[2249]: E0625 14:37:07.895937 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 
14:37:07.900035 containerd[1245]: time="2024-06-25T14:37:07.899989739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:37:07.910519 containerd[1245]: time="2024-06-25T14:37:07.910199558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:07.910519 containerd[1245]: time="2024-06-25T14:37:07.910300758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:07.910519 containerd[1245]: time="2024-06-25T14:37:07.910321039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:07.910519 containerd[1245]: time="2024-06-25T14:37:07.910333639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:07.925192 systemd[1]: Started cri-containerd-15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd.scope - libcontainer container 15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd. Jun 25 14:37:07.942000 audit: BPF prog-id=121 op=LOAD Jun 25 14:37:07.942000 audit: BPF prog-id=122 op=LOAD Jun 25 14:37:07.942000 audit[2738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2728 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626432623235303262646263353862323537346433376461383361 Jun 25 14:37:07.943000 audit: BPF prog-id=123 op=LOAD Jun 25 14:37:07.943000 audit[2738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2728 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626432623235303262646263353862323537346433376461383361 Jun 25 14:37:07.943000 audit: BPF prog-id=123 op=UNLOAD Jun 25 14:37:07.943000 audit: BPF prog-id=122 op=UNLOAD Jun 25 14:37:07.943000 audit: BPF prog-id=124 op=LOAD Jun 25 14:37:07.943000 audit[2738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2728 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:07.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135626432623235303262646263353862323537346433376461383361 Jun 25 14:37:07.964099 containerd[1245]: time="2024-06-25T14:37:07.964054342Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-nnkxj,Uid:dd1da708-a778-4123-acdf-17a2f26a1400,Namespace:calico-system,Attempt:0,} returns sandbox id \"15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd\"" Jun 25 14:37:07.965006 kubelet[2249]: E0625 14:37:07.964782 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:08.318000 audit[2762]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:08.318000 audit[2762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffffb929ba0 a2=0 a3=1 items=0 ppid=2410 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:08.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:08.319000 audit[2762]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:08.319000 audit[2762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb929ba0 a2=0 a3=1 items=0 ppid=2410 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:08.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:09.527386 kubelet[2249]: E0625 14:37:09.527338 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:10.444353 containerd[1245]: time="2024-06-25T14:37:10.444244176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:10.444842 containerd[1245]: time="2024-06-25T14:37:10.444806537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:37:10.445758 containerd[1245]: time="2024-06-25T14:37:10.445723698Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:10.449226 containerd[1245]: time="2024-06-25T14:37:10.449187264Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:10.450725 containerd[1245]: time="2024-06-25T14:37:10.450680506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:10.451593 containerd[1245]: time="2024-06-25T14:37:10.451549947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id 
\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.551512128s" Jun 25 14:37:10.451644 containerd[1245]: time="2024-06-25T14:37:10.451591387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 14:37:10.452883 containerd[1245]: time="2024-06-25T14:37:10.452851549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:37:10.471347 containerd[1245]: time="2024-06-25T14:37:10.471295819Z" level=info msg="CreateContainer within sandbox \"4ae77d0dd5ed93a36555784607d610f89160884fe1e2c769b339dfdb62b863a5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:37:10.482276 containerd[1245]: time="2024-06-25T14:37:10.482217436Z" level=info msg="CreateContainer within sandbox \"4ae77d0dd5ed93a36555784607d610f89160884fe1e2c769b339dfdb62b863a5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"00da9db9616432a9b938583ed799b11bc96ea47b1bfb378a4039181f4fabd447\"" Jun 25 14:37:10.485014 containerd[1245]: time="2024-06-25T14:37:10.484933280Z" level=info msg="StartContainer for \"00da9db9616432a9b938583ed799b11bc96ea47b1bfb378a4039181f4fabd447\"" Jun 25 14:37:10.517173 systemd[1]: Started cri-containerd-00da9db9616432a9b938583ed799b11bc96ea47b1bfb378a4039181f4fabd447.scope - libcontainer container 00da9db9616432a9b938583ed799b11bc96ea47b1bfb378a4039181f4fabd447. Jun 25 14:37:10.526000 audit: BPF prog-id=125 op=LOAD Jun 25 14:37:10.528226 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 14:37:10.528302 kernel: audit: type=1334 audit(1719326230.526:493): prog-id=125 op=LOAD Jun 25 14:37:10.528000 audit: BPF prog-id=126 op=LOAD Jun 25 14:37:10.528000 audit[2776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2687 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:10.533589 kernel: audit: type=1334 audit(1719326230.528:494): prog-id=126 op=LOAD Jun 25 14:37:10.533711 kernel: audit: type=1300 audit(1719326230.528:494): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2687 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:10.533849 kernel: audit: type=1327 audit(1719326230.528:494): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030646139646239363136343332613962393338353833656437393962 Jun 25 14:37:10.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030646139646239363136343332613962393338353833656437393962 Jun 25 14:37:10.529000 audit: BPF prog-id=127 op=LOAD Jun 25 14:37:10.536985 kernel: audit: type=1334 audit(1719326230.529:495): prog-id=127 op=LOAD Jun 25 
14:37:10.529000 audit[2776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2687 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:10.539987 kernel: audit: type=1300 audit(1719326230.529:495): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2687 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:10.540599 kernel: audit: type=1327 audit(1719326230.529:495): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030646139646239363136343332613962393338353833656437393962 Jun 25 14:37:10.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030646139646239363136343332613962393338353833656437393962 Jun 25 14:37:10.545049 kernel: audit: type=1334 audit(1719326230.532:496): prog-id=127 op=UNLOAD Jun 25 14:37:10.545126 kernel: audit: type=1334 audit(1719326230.532:497): prog-id=126 op=UNLOAD Jun 25 14:37:10.545153 kernel: audit: type=1334 audit(1719326230.532:498): prog-id=128 op=LOAD Jun 25 14:37:10.532000 audit: BPF prog-id=127 op=UNLOAD Jun 25 14:37:10.532000 audit: BPF prog-id=126 op=UNLOAD Jun 25 14:37:10.532000 audit: BPF prog-id=128 op=LOAD Jun 25 14:37:10.532000 audit[2776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2687 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:10.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030646139646239363136343332613962393338353833656437393962 Jun 25 14:37:10.566087 containerd[1245]: time="2024-06-25T14:37:10.566040409Z" level=info msg="StartContainer for \"00da9db9616432a9b938583ed799b11bc96ea47b1bfb378a4039181f4fabd447\" returns successfully" Jun 25 14:37:10.590364 kubelet[2249]: E0625 14:37:10.588274 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:10.672549 kubelet[2249]: E0625 14:37:10.672506 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.672549 kubelet[2249]: W0625 14:37:10.672537 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.672716 kubelet[2249]: E0625 14:37:10.672559 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:10.672823 kubelet[2249]: E0625 14:37:10.672807 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.672823 kubelet[2249]: W0625 14:37:10.672820 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.672896 kubelet[2249]: E0625 14:37:10.672831 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.673042 kubelet[2249]: E0625 14:37:10.673026 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.673042 kubelet[2249]: W0625 14:37:10.673039 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.673118 kubelet[2249]: E0625 14:37:10.673050 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.673244 kubelet[2249]: E0625 14:37:10.673218 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.673289 kubelet[2249]: W0625 14:37:10.673245 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.673289 kubelet[2249]: E0625 14:37:10.673256 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.673461 kubelet[2249]: E0625 14:37:10.673445 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.673461 kubelet[2249]: W0625 14:37:10.673461 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.673531 kubelet[2249]: E0625 14:37:10.673471 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.674545 kubelet[2249]: E0625 14:37:10.674520 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.674545 kubelet[2249]: W0625 14:37:10.674538 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.674673 kubelet[2249]: E0625 14:37:10.674554 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:10.674767 kubelet[2249]: E0625 14:37:10.674752 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.674767 kubelet[2249]: W0625 14:37:10.674763 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.674831 kubelet[2249]: E0625 14:37:10.674772 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.677242 kubelet[2249]: E0625 14:37:10.677205 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.677242 kubelet[2249]: W0625 14:37:10.677222 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.677242 kubelet[2249]: E0625 14:37:10.677242 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.677452 kubelet[2249]: E0625 14:37:10.677436 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.677452 kubelet[2249]: W0625 14:37:10.677448 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.677522 kubelet[2249]: E0625 14:37:10.677459 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.677605 kubelet[2249]: E0625 14:37:10.677591 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.677605 kubelet[2249]: W0625 14:37:10.677601 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.677668 kubelet[2249]: E0625 14:37:10.677612 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.677745 kubelet[2249]: E0625 14:37:10.677732 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.677745 kubelet[2249]: W0625 14:37:10.677742 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.677852 kubelet[2249]: E0625 14:37:10.677751 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:10.679066 kubelet[2249]: E0625 14:37:10.679038 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.679066 kubelet[2249]: W0625 14:37:10.679064 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.679171 kubelet[2249]: E0625 14:37:10.679079 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.679287 kubelet[2249]: E0625 14:37:10.679273 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.679287 kubelet[2249]: W0625 14:37:10.679284 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.679362 kubelet[2249]: E0625 14:37:10.679294 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.679522 kubelet[2249]: E0625 14:37:10.679504 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.679522 kubelet[2249]: W0625 14:37:10.679517 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.679606 kubelet[2249]: E0625 14:37:10.679528 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.679697 kubelet[2249]: E0625 14:37:10.679683 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.679697 kubelet[2249]: W0625 14:37:10.679695 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.679762 kubelet[2249]: E0625 14:37:10.679711 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.681125 kubelet[2249]: E0625 14:37:10.681095 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.681125 kubelet[2249]: W0625 14:37:10.681122 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.681243 kubelet[2249]: E0625 14:37:10.681136 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:10.681382 kubelet[2249]: E0625 14:37:10.681366 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.681382 kubelet[2249]: W0625 14:37:10.681378 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.681455 kubelet[2249]: E0625 14:37:10.681393 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.681652 kubelet[2249]: E0625 14:37:10.681630 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.681652 kubelet[2249]: W0625 14:37:10.681646 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.681744 kubelet[2249]: E0625 14:37:10.681663 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.681896 kubelet[2249]: E0625 14:37:10.681868 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.681896 kubelet[2249]: W0625 14:37:10.681880 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.681896 kubelet[2249]: E0625 14:37:10.681894 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.682194 kubelet[2249]: E0625 14:37:10.682168 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.682194 kubelet[2249]: W0625 14:37:10.682183 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.682194 kubelet[2249]: E0625 14:37:10.682198 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.682404 kubelet[2249]: E0625 14:37:10.682379 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.682404 kubelet[2249]: W0625 14:37:10.682393 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.682404 kubelet[2249]: E0625 14:37:10.682402 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:10.682674 kubelet[2249]: E0625 14:37:10.682650 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.682674 kubelet[2249]: W0625 14:37:10.682665 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.682765 kubelet[2249]: E0625 14:37:10.682734 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.682851 kubelet[2249]: E0625 14:37:10.682837 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.682851 kubelet[2249]: W0625 14:37:10.682848 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.682964 kubelet[2249]: E0625 14:37:10.682938 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.683156 kubelet[2249]: E0625 14:37:10.683122 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.683156 kubelet[2249]: W0625 14:37:10.683138 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.683156 kubelet[2249]: E0625 14:37:10.683155 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.686140 kubelet[2249]: E0625 14:37:10.686104 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.686140 kubelet[2249]: W0625 14:37:10.686126 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.686140 kubelet[2249]: E0625 14:37:10.686148 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.686933 kubelet[2249]: E0625 14:37:10.686908 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.686933 kubelet[2249]: W0625 14:37:10.686927 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.687125 kubelet[2249]: E0625 14:37:10.687101 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:10.687335 kubelet[2249]: E0625 14:37:10.687310 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.687335 kubelet[2249]: W0625 14:37:10.687325 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.687465 kubelet[2249]: E0625 14:37:10.687447 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.687552 kubelet[2249]: E0625 14:37:10.687495 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.687617 kubelet[2249]: W0625 14:37:10.687603 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.687711 kubelet[2249]: E0625 14:37:10.687687 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.687952 kubelet[2249]: E0625 14:37:10.687938 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.688058 kubelet[2249]: W0625 14:37:10.688043 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.688140 kubelet[2249]: E0625 14:37:10.688126 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.688393 kubelet[2249]: E0625 14:37:10.688370 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.688393 kubelet[2249]: W0625 14:37:10.688387 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.688473 kubelet[2249]: E0625 14:37:10.688404 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.689047 kubelet[2249]: E0625 14:37:10.689016 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.689047 kubelet[2249]: W0625 14:37:10.689035 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.689047 kubelet[2249]: E0625 14:37:10.689053 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:10.689515 kubelet[2249]: E0625 14:37:10.689499 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.689594 kubelet[2249]: W0625 14:37:10.689581 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.689668 kubelet[2249]: E0625 14:37:10.689656 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:10.692177 kubelet[2249]: E0625 14:37:10.692156 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:10.692313 kubelet[2249]: W0625 14:37:10.692296 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:10.692402 kubelet[2249]: E0625 14:37:10.692388 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.527307 kubelet[2249]: E0625 14:37:11.527263 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:11.590777 kubelet[2249]: I0625 14:37:11.590737 2249 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:37:11.591413 kubelet[2249]: E0625 14:37:11.591391 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:11.687382 kubelet[2249]: E0625 14:37:11.687353 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.687382 kubelet[2249]: W0625 14:37:11.687374 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.687561 kubelet[2249]: E0625 14:37:11.687396 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.687561 kubelet[2249]: E0625 14:37:11.687552 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.687615 kubelet[2249]: W0625 14:37:11.687562 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.687615 kubelet[2249]: E0625 14:37:11.687571 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:11.687775 kubelet[2249]: E0625 14:37:11.687758 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.687775 kubelet[2249]: W0625 14:37:11.687774 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.687849 kubelet[2249]: E0625 14:37:11.687784 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.687951 kubelet[2249]: E0625 14:37:11.687938 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.687951 kubelet[2249]: W0625 14:37:11.687949 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.688059 kubelet[2249]: E0625 14:37:11.687959 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.688154 kubelet[2249]: E0625 14:37:11.688140 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.688206 kubelet[2249]: W0625 14:37:11.688157 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.688206 kubelet[2249]: E0625 14:37:11.688167 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.688330 kubelet[2249]: E0625 14:37:11.688316 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.688330 kubelet[2249]: W0625 14:37:11.688327 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.688398 kubelet[2249]: E0625 14:37:11.688337 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.688498 kubelet[2249]: E0625 14:37:11.688485 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.688498 kubelet[2249]: W0625 14:37:11.688497 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.688566 kubelet[2249]: E0625 14:37:11.688507 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:11.688671 kubelet[2249]: E0625 14:37:11.688651 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.688671 kubelet[2249]: W0625 14:37:11.688668 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.688738 kubelet[2249]: E0625 14:37:11.688678 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.688893 kubelet[2249]: E0625 14:37:11.688877 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.688933 kubelet[2249]: W0625 14:37:11.688895 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.688933 kubelet[2249]: E0625 14:37:11.688906 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.689079 kubelet[2249]: E0625 14:37:11.689065 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.689079 kubelet[2249]: W0625 14:37:11.689076 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.689142 kubelet[2249]: E0625 14:37:11.689085 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.689250 kubelet[2249]: E0625 14:37:11.689234 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.689250 kubelet[2249]: W0625 14:37:11.689249 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.689325 kubelet[2249]: E0625 14:37:11.689259 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.689420 kubelet[2249]: E0625 14:37:11.689405 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.689420 kubelet[2249]: W0625 14:37:11.689416 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.689478 kubelet[2249]: E0625 14:37:11.689425 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:11.689637 kubelet[2249]: E0625 14:37:11.689624 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.689637 kubelet[2249]: W0625 14:37:11.689636 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.689710 kubelet[2249]: E0625 14:37:11.689645 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.689798 kubelet[2249]: E0625 14:37:11.689784 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.689798 kubelet[2249]: W0625 14:37:11.689796 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.689859 kubelet[2249]: E0625 14:37:11.689804 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.690001 kubelet[2249]: E0625 14:37:11.689985 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.690001 kubelet[2249]: W0625 14:37:11.689998 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.690083 kubelet[2249]: E0625 14:37:11.690008 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.690234 kubelet[2249]: E0625 14:37:11.690213 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.690275 kubelet[2249]: W0625 14:37:11.690234 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.690275 kubelet[2249]: E0625 14:37:11.690246 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.690458 kubelet[2249]: E0625 14:37:11.690446 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.690458 kubelet[2249]: W0625 14:37:11.690457 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.690531 kubelet[2249]: E0625 14:37:11.690471 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:11.690651 kubelet[2249]: E0625 14:37:11.690637 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.690651 kubelet[2249]: W0625 14:37:11.690649 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.690713 kubelet[2249]: E0625 14:37:11.690663 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.690879 kubelet[2249]: E0625 14:37:11.690863 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.690923 kubelet[2249]: W0625 14:37:11.690881 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.690923 kubelet[2249]: E0625 14:37:11.690917 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.691140 kubelet[2249]: E0625 14:37:11.691125 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.691187 kubelet[2249]: W0625 14:37:11.691140 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.691187 kubelet[2249]: E0625 14:37:11.691154 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.691315 kubelet[2249]: E0625 14:37:11.691300 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.691315 kubelet[2249]: W0625 14:37:11.691313 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.691393 kubelet[2249]: E0625 14:37:11.691326 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.691507 kubelet[2249]: E0625 14:37:11.691494 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.691507 kubelet[2249]: W0625 14:37:11.691505 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.691581 kubelet[2249]: E0625 14:37:11.691531 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:11.691696 kubelet[2249]: E0625 14:37:11.691683 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.691696 kubelet[2249]: W0625 14:37:11.691695 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.691769 kubelet[2249]: E0625 14:37:11.691738 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.691863 kubelet[2249]: E0625 14:37:11.691849 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.691863 kubelet[2249]: W0625 14:37:11.691859 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.691922 kubelet[2249]: E0625 14:37:11.691875 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.692076 kubelet[2249]: E0625 14:37:11.692061 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.692076 kubelet[2249]: W0625 14:37:11.692074 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.692160 kubelet[2249]: E0625 14:37:11.692088 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.692294 kubelet[2249]: E0625 14:37:11.692279 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.692294 kubelet[2249]: W0625 14:37:11.692292 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.692365 kubelet[2249]: E0625 14:37:11.692312 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.692535 kubelet[2249]: E0625 14:37:11.692519 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.692535 kubelet[2249]: W0625 14:37:11.692532 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.692611 kubelet[2249]: E0625 14:37:11.692547 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:11.692808 kubelet[2249]: E0625 14:37:11.692791 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.692851 kubelet[2249]: W0625 14:37:11.692810 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.692851 kubelet[2249]: E0625 14:37:11.692826 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.693097 kubelet[2249]: E0625 14:37:11.693083 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.693135 kubelet[2249]: W0625 14:37:11.693097 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.693135 kubelet[2249]: E0625 14:37:11.693111 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.693293 kubelet[2249]: E0625 14:37:11.693277 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.693293 kubelet[2249]: W0625 14:37:11.693289 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.693357 kubelet[2249]: E0625 14:37:11.693300 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.693485 kubelet[2249]: E0625 14:37:11.693473 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.693485 kubelet[2249]: W0625 14:37:11.693483 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.693546 kubelet[2249]: E0625 14:37:11.693494 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.693970 kubelet[2249]: E0625 14:37:11.693954 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.694039 kubelet[2249]: W0625 14:37:11.693968 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.694111 kubelet[2249]: E0625 14:37:11.694090 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:37:11.694175 kubelet[2249]: E0625 14:37:11.694161 2249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:37:11.694175 kubelet[2249]: W0625 14:37:11.694172 2249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:37:11.694294 kubelet[2249]: E0625 14:37:11.694182 2249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:37:11.730658 containerd[1245]: time="2024-06-25T14:37:11.730579025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:11.731780 containerd[1245]: time="2024-06-25T14:37:11.731745387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:37:11.733955 containerd[1245]: time="2024-06-25T14:37:11.733926790Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:11.735180 containerd[1245]: time="2024-06-25T14:37:11.735148832Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:11.737439 containerd[1245]: time="2024-06-25T14:37:11.737403355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:11.738120 containerd[1245]: time="2024-06-25T14:37:11.738044436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.285153646s" Jun 25 14:37:11.738183 containerd[1245]: time="2024-06-25T14:37:11.738120196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:37:11.744756 containerd[1245]: time="2024-06-25T14:37:11.744716526Z" level=info msg="CreateContainer within sandbox \"15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:37:11.755551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687221413.mount: Deactivated successfully. 
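Annotation: the repeated driver-call failures above are the kubelet probing the nodeagent~uds FlexVolume directory before the "flexvol-driver" container (whose ghcr.io/flatcar/calico/pod2daemon-flexvol image is pulled at the end of the entries above) has installed the uds executable. The FlexVolume convention is that the driver binary answers the "init" call with a JSON status on stdout, so an empty stdout makes the kubelet's unmarshal fail with "unexpected end of JSON input". Below is a minimal, hypothetical sketch of an init responder written to that convention; it is not Calico's actual uds binary.

```go
// Hypothetical sketch of the FlexVolume "init" handshake the kubelet expects:
// the driver executable prints a JSON status object to stdout. Until a real
// binary exists at .../volume/exec/nodeagent~uds/uds, stdout is empty and the
// kubelet's JSON unmarshal fails as seen in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any other call is reported as unsupported, per the FlexVolume convention.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
```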
Jun 25 14:37:11.758488 containerd[1245]: time="2024-06-25T14:37:11.758445987Z" level=info msg="CreateContainer within sandbox \"15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de\"" Jun 25 14:37:11.760356 containerd[1245]: time="2024-06-25T14:37:11.759207068Z" level=info msg="StartContainer for \"de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de\"" Jun 25 14:37:11.783202 systemd[1]: Started cri-containerd-de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de.scope - libcontainer container de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de. Jun 25 14:37:11.796000 audit: BPF prog-id=129 op=LOAD Jun 25 14:37:11.796000 audit[2885]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2728 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:11.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465396461663437626366373932366364383763616231373739373136 Jun 25 14:37:11.796000 audit: BPF prog-id=130 op=LOAD Jun 25 14:37:11.796000 audit[2885]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2728 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:11.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465396461663437626366373932366364383763616231373739373136 Jun 25 14:37:11.796000 audit: BPF prog-id=130 op=UNLOAD Jun 25 14:37:11.796000 audit: BPF prog-id=129 op=UNLOAD Jun 25 14:37:11.796000 audit: BPF prog-id=131 op=LOAD Jun 25 14:37:11.796000 audit[2885]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2728 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:11.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465396461663437626366373932366364383763616231373739373136 Jun 25 14:37:11.813482 containerd[1245]: time="2024-06-25T14:37:11.813431189Z" level=info msg="StartContainer for \"de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de\" returns successfully" Jun 25 14:37:11.833660 systemd[1]: cri-containerd-de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de.scope: Deactivated successfully. 
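Annotation: the "CreateContainer within sandbox ... returns container id" and "StartContainer ... returns successfully" entries above correspond to CRI RuntimeService calls issued against containerd's socket. The sketch below is a hedged illustration of those two calls using the published k8s.io/cri-api client; the sandbox ID and image are copied from the log, everything else is illustrative, and a real invocation would normally also carry the pod's sandbox config.

```go
// Hedged sketch (not kubelet or containerd code) of the CRI calls behind the
// CreateContainer/StartContainer log entries above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Pod sandbox ID taken from the log; its creation happened earlier.
	sandboxID := "15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd"

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "flexvol-driver", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0"},
		},
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started container", created.ContainerId)
}
```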
Jun 25 14:37:11.841000 audit: BPF prog-id=131 op=UNLOAD Jun 25 14:37:11.870614 containerd[1245]: time="2024-06-25T14:37:11.870557394Z" level=info msg="shim disconnected" id=de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de namespace=k8s.io Jun 25 14:37:11.870911 containerd[1245]: time="2024-06-25T14:37:11.870890754Z" level=warning msg="cleaning up after shim disconnected" id=de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de namespace=k8s.io Jun 25 14:37:11.871003 containerd[1245]: time="2024-06-25T14:37:11.870987794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:37:12.465563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de9daf47bcf7926cd87cab177971677578902187a8822306432286222788f1de-rootfs.mount: Deactivated successfully. Jun 25 14:37:12.592189 kubelet[2249]: E0625 14:37:12.592160 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:12.593158 containerd[1245]: time="2024-06-25T14:37:12.593124774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:37:12.607679 kubelet[2249]: I0625 14:37:12.607300 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-569fb4ff45-d9dbc" podStartSLOduration=3.054393303 podStartE2EDuration="5.607282794s" podCreationTimestamp="2024-06-25 14:37:07 +0000 UTC" firstStartedPulling="2024-06-25 14:37:07.899522018 +0000 UTC m=+21.464709245" lastFinishedPulling="2024-06-25 14:37:10.452411509 +0000 UTC m=+24.017598736" observedRunningTime="2024-06-25 14:37:10.60446067 +0000 UTC m=+24.169647937" watchObservedRunningTime="2024-06-25 14:37:12.607282794 +0000 UTC m=+26.172470021" Jun 25 14:37:13.528167 kubelet[2249]: E0625 14:37:13.528118 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:15.528044 kubelet[2249]: E0625 14:37:15.527971 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:16.882690 containerd[1245]: time="2024-06-25T14:37:16.882642016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:16.883576 containerd[1245]: time="2024-06-25T14:37:16.883544377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:37:16.884256 containerd[1245]: time="2024-06-25T14:37:16.884221178Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:16.886090 containerd[1245]: time="2024-06-25T14:37:16.886061140Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:16.887448 containerd[1245]: time="2024-06-25T14:37:16.887418222Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:16.888343 containerd[1245]: time="2024-06-25T14:37:16.888301103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.295004969s" Jun 25 14:37:16.888412 containerd[1245]: time="2024-06-25T14:37:16.888342503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 14:37:16.891041 containerd[1245]: time="2024-06-25T14:37:16.890953145Z" level=info msg="CreateContainer within sandbox \"15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:37:16.901539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561921352.mount: Deactivated successfully. Jun 25 14:37:16.905811 containerd[1245]: time="2024-06-25T14:37:16.905749681Z" level=info msg="CreateContainer within sandbox \"15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb\"" Jun 25 14:37:16.907223 containerd[1245]: time="2024-06-25T14:37:16.906375002Z" level=info msg="StartContainer for \"9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb\"" Jun 25 14:37:16.941209 systemd[1]: Started cri-containerd-9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb.scope - libcontainer container 9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb. 
Jun 25 14:37:16.951000 audit: BPF prog-id=132 op=LOAD Jun 25 14:37:16.953483 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 14:37:16.953570 kernel: audit: type=1334 audit(1719326236.951:505): prog-id=132 op=LOAD Jun 25 14:37:16.951000 audit[2958]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2728 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:16.957141 kernel: audit: type=1300 audit(1719326236.951:505): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2728 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:16.957222 kernel: audit: type=1327 audit(1719326236.951:505): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613930643361373431383232376562316238336263633566333530 Jun 25 14:37:16.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613930643361373431383232376562316238336263633566333530 Jun 25 14:37:16.959799 kernel: audit: type=1334 audit(1719326236.951:506): prog-id=133 op=LOAD Jun 25 14:37:16.951000 audit: BPF prog-id=133 op=LOAD Jun 25 14:37:16.960432 kernel: audit: type=1300 audit(1719326236.951:506): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2728 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:16.951000 audit[2958]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2728 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:16.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613930643361373431383232376562316238336263633566333530 Jun 25 14:37:16.965659 kernel: audit: type=1327 audit(1719326236.951:506): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613930643361373431383232376562316238336263633566333530 Jun 25 14:37:16.965751 kernel: audit: type=1334 audit(1719326236.952:507): prog-id=133 op=UNLOAD Jun 25 14:37:16.952000 audit: BPF prog-id=133 op=UNLOAD Jun 25 14:37:16.966335 kernel: audit: type=1334 audit(1719326236.952:508): prog-id=132 op=UNLOAD Jun 25 14:37:16.952000 audit: BPF prog-id=132 op=UNLOAD Jun 25 14:37:16.952000 audit: BPF prog-id=134 op=LOAD Jun 25 14:37:16.967574 kernel: audit: type=1334 audit(1719326236.952:509): prog-id=134 op=LOAD Jun 25 14:37:16.952000 audit[2958]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2728 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:16.970725 kernel: audit: type=1300 audit(1719326236.952:509): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2728 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:16.952000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613930643361373431383232376562316238336263633566333530 Jun 25 14:37:17.034130 containerd[1245]: time="2024-06-25T14:37:17.034071497Z" level=info msg="StartContainer for \"9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb\" returns successfully" Jun 25 14:37:17.498346 systemd[1]: cri-containerd-9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb.scope: Deactivated successfully. Jun 25 14:37:17.502000 audit: BPF prog-id=134 op=UNLOAD Jun 25 14:37:17.516917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb-rootfs.mount: Deactivated successfully. Jun 25 14:37:17.567427 containerd[1245]: time="2024-06-25T14:37:17.567366396Z" level=info msg="shim disconnected" id=9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb namespace=k8s.io Jun 25 14:37:17.567427 containerd[1245]: time="2024-06-25T14:37:17.567418156Z" level=warning msg="cleaning up after shim disconnected" id=9ea90d3a7418227eb1b83bcc5f3506ea72704a32be7d7bcdaf2a5ae2bb5325bb namespace=k8s.io Jun 25 14:37:17.567427 containerd[1245]: time="2024-06-25T14:37:17.567427276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:37:17.570041 kubelet[2249]: E0625 14:37:17.568660 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:17.573610 kubelet[2249]: I0625 14:37:17.573581 2249 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 14:37:17.594942 kubelet[2249]: I0625 14:37:17.594879 2249 topology_manager.go:215] "Topology Admit Handler" podUID="77c6372f-63bb-45d5-91a8-a2813fbef04f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rn8b9" Jun 25 14:37:17.595134 kubelet[2249]: I0625 14:37:17.595094 2249 topology_manager.go:215] "Topology Admit Handler" podUID="92fe97e2-6b14-42a5-83ef-fce155119efa" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6snbw" Jun 25 14:37:17.595210 kubelet[2249]: I0625 14:37:17.595184 2249 topology_manager.go:215] "Topology Admit Handler" podUID="1ad4c167-f4c1-437b-b169-12ec098e308e" podNamespace="calico-system" podName="calico-kube-controllers-567786b6b9-gh9kf" Jun 25 14:37:17.605126 systemd[1]: Created slice kubepods-besteffort-pod1ad4c167_f4c1_437b_b169_12ec098e308e.slice - libcontainer container kubepods-besteffort-pod1ad4c167_f4c1_437b_b169_12ec098e308e.slice. 
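Annotation: the "Created slice kubepods-besteffort-pod..." unit above (and the burstable slices in the entries that follow) reflect the systemd cgroup naming the kubelet uses for pods: the pod's QoS class plus its UID with dashes replaced by underscores. The helper below is an illustrative sketch, not kubelet code, reproducing the slice names for the pod UIDs admitted above.

```go
// Illustrative helper: derive the systemd slice name used for a pod cgroup
// from its QoS class and UID. The output matches the
// kubepods-besteffort-pod1ad4c167_f4c1_437b_b169_12ec098e308e.slice entry above
// and the burstable coredns slices that follow.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "1ad4c167-f4c1-437b-b169-12ec098e308e"))
	fmt.Println(podSlice("burstable", "77c6372f-63bb-45d5-91a8-a2813fbef04f"))
	fmt.Println(podSlice("burstable", "92fe97e2-6b14-42a5-83ef-fce155119efa"))
}
```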
Jun 25 14:37:17.611801 systemd[1]: Created slice kubepods-burstable-pod77c6372f_63bb_45d5_91a8_a2813fbef04f.slice - libcontainer container kubepods-burstable-pod77c6372f_63bb_45d5_91a8_a2813fbef04f.slice. Jun 25 14:37:17.617788 systemd[1]: Created slice kubepods-burstable-pod92fe97e2_6b14_42a5_83ef_fce155119efa.slice - libcontainer container kubepods-burstable-pod92fe97e2_6b14_42a5_83ef_fce155119efa.slice. Jun 25 14:37:17.620563 kubelet[2249]: E0625 14:37:17.620533 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:17.621479 containerd[1245]: time="2024-06-25T14:37:17.621390771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 14:37:17.769804 kubelet[2249]: I0625 14:37:17.769652 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxrn7\" (UniqueName: \"kubernetes.io/projected/1ad4c167-f4c1-437b-b169-12ec098e308e-kube-api-access-vxrn7\") pod \"calico-kube-controllers-567786b6b9-gh9kf\" (UID: \"1ad4c167-f4c1-437b-b169-12ec098e308e\") " pod="calico-system/calico-kube-controllers-567786b6b9-gh9kf" Jun 25 14:37:17.770015 kubelet[2249]: I0625 14:37:17.769969 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77c6372f-63bb-45d5-91a8-a2813fbef04f-config-volume\") pod \"coredns-7db6d8ff4d-rn8b9\" (UID: \"77c6372f-63bb-45d5-91a8-a2813fbef04f\") " pod="kube-system/coredns-7db6d8ff4d-rn8b9" Jun 25 14:37:17.770122 kubelet[2249]: I0625 14:37:17.770108 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92fe97e2-6b14-42a5-83ef-fce155119efa-config-volume\") pod \"coredns-7db6d8ff4d-6snbw\" (UID: \"92fe97e2-6b14-42a5-83ef-fce155119efa\") " pod="kube-system/coredns-7db6d8ff4d-6snbw" Jun 25 14:37:17.770233 kubelet[2249]: I0625 14:37:17.770216 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbxzf\" (UniqueName: \"kubernetes.io/projected/92fe97e2-6b14-42a5-83ef-fce155119efa-kube-api-access-hbxzf\") pod \"coredns-7db6d8ff4d-6snbw\" (UID: \"92fe97e2-6b14-42a5-83ef-fce155119efa\") " pod="kube-system/coredns-7db6d8ff4d-6snbw" Jun 25 14:37:17.770390 kubelet[2249]: I0625 14:37:17.770374 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad4c167-f4c1-437b-b169-12ec098e308e-tigera-ca-bundle\") pod \"calico-kube-controllers-567786b6b9-gh9kf\" (UID: \"1ad4c167-f4c1-437b-b169-12ec098e308e\") " pod="calico-system/calico-kube-controllers-567786b6b9-gh9kf" Jun 25 14:37:17.771050 kubelet[2249]: I0625 14:37:17.771030 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4xm\" (UniqueName: \"kubernetes.io/projected/77c6372f-63bb-45d5-91a8-a2813fbef04f-kube-api-access-xc4xm\") pod \"coredns-7db6d8ff4d-rn8b9\" (UID: \"77c6372f-63bb-45d5-91a8-a2813fbef04f\") " pod="kube-system/coredns-7db6d8ff4d-rn8b9" Jun 25 14:37:17.908685 containerd[1245]: time="2024-06-25T14:37:17.908585861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567786b6b9-gh9kf,Uid:1ad4c167-f4c1-437b-b169-12ec098e308e,Namespace:calico-system,Attempt:0,}" 
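Annotation: the RunPodSandbox requests above are about to fail (entries below) because the Calico CNI plugin needs /var/lib/calico/nodename, which only exists once the calico/node container is running and has mounted /var/lib/calico/. The sketch below mirrors that precondition check as described by the error text; it is illustrative, not the plugin's actual code.

```go
// Illustrative precondition check matching the error text below: the Calico
// CNI plugin stats /var/lib/calico/nodename to learn which node it is on;
// until calico/node runs and writes that file, sandbox setup and teardown fail.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("sandbox setup would fail: %v (is calico/node running and /var/lib/calico/ mounted?)\n", err)
		return
	}
	fmt.Println("calico node name:", string(data))
}
```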
Jun 25 14:37:17.917594 kubelet[2249]: E0625 14:37:17.917562 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:17.922193 kubelet[2249]: E0625 14:37:17.922165 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:17.937458 containerd[1245]: time="2024-06-25T14:37:17.922757635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6snbw,Uid:92fe97e2-6b14-42a5-83ef-fce155119efa,Namespace:kube-system,Attempt:0,}" Jun 25 14:37:17.937589 containerd[1245]: time="2024-06-25T14:37:17.931849404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rn8b9,Uid:77c6372f-63bb-45d5-91a8-a2813fbef04f,Namespace:kube-system,Attempt:0,}" Jun 25 14:37:18.233851 containerd[1245]: time="2024-06-25T14:37:18.233777935Z" level=error msg="Failed to destroy network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.234244 containerd[1245]: time="2024-06-25T14:37:18.234190535Z" level=error msg="encountered an error cleaning up failed sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.234293 containerd[1245]: time="2024-06-25T14:37:18.234261495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6snbw,Uid:92fe97e2-6b14-42a5-83ef-fce155119efa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.236736 containerd[1245]: time="2024-06-25T14:37:18.236117577Z" level=error msg="Failed to destroy network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.237426 kubelet[2249]: E0625 14:37:18.237383 2249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.237606 kubelet[2249]: E0625 14:37:18.237462 2249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6snbw" Jun 25 14:37:18.237606 kubelet[2249]: E0625 14:37:18.237485 2249 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6snbw" Jun 25 14:37:18.237606 kubelet[2249]: E0625 14:37:18.237531 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6snbw_kube-system(92fe97e2-6b14-42a5-83ef-fce155119efa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6snbw_kube-system(92fe97e2-6b14-42a5-83ef-fce155119efa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6snbw" podUID="92fe97e2-6b14-42a5-83ef-fce155119efa" Jun 25 14:37:18.239039 containerd[1245]: time="2024-06-25T14:37:18.238190979Z" level=error msg="encountered an error cleaning up failed sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.239039 containerd[1245]: time="2024-06-25T14:37:18.238270139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567786b6b9-gh9kf,Uid:1ad4c167-f4c1-437b-b169-12ec098e308e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.239449 kubelet[2249]: E0625 14:37:18.239408 2249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.239517 kubelet[2249]: E0625 14:37:18.239467 2249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567786b6b9-gh9kf" Jun 25 14:37:18.239517 kubelet[2249]: E0625 14:37:18.239487 2249 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-567786b6b9-gh9kf" Jun 25 14:37:18.239577 kubelet[2249]: E0625 14:37:18.239520 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-567786b6b9-gh9kf_calico-system(1ad4c167-f4c1-437b-b169-12ec098e308e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-567786b6b9-gh9kf_calico-system(1ad4c167-f4c1-437b-b169-12ec098e308e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-567786b6b9-gh9kf" podUID="1ad4c167-f4c1-437b-b169-12ec098e308e" Jun 25 14:37:18.242999 containerd[1245]: time="2024-06-25T14:37:18.242930783Z" level=error msg="Failed to destroy network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.243649 containerd[1245]: time="2024-06-25T14:37:18.243610664Z" level=error msg="encountered an error cleaning up failed sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.243738 containerd[1245]: time="2024-06-25T14:37:18.243675864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rn8b9,Uid:77c6372f-63bb-45d5-91a8-a2813fbef04f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.243919 kubelet[2249]: E0625 14:37:18.243889 2249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.243990 kubelet[2249]: E0625 14:37:18.243936 2249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rn8b9" Jun 25 14:37:18.243990 kubelet[2249]: E0625 
14:37:18.243956 2249 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rn8b9" Jun 25 14:37:18.244055 kubelet[2249]: E0625 14:37:18.244009 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rn8b9_kube-system(77c6372f-63bb-45d5-91a8-a2813fbef04f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rn8b9_kube-system(77c6372f-63bb-45d5-91a8-a2813fbef04f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rn8b9" podUID="77c6372f-63bb-45d5-91a8-a2813fbef04f" Jun 25 14:37:18.622982 kubelet[2249]: I0625 14:37:18.622946 2249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:18.623875 kubelet[2249]: I0625 14:37:18.623827 2249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:18.624649 kubelet[2249]: I0625 14:37:18.624628 2249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:18.629592 containerd[1245]: time="2024-06-25T14:37:18.629530430Z" level=info msg="StopPodSandbox for \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\"" Jun 25 14:37:18.630190 containerd[1245]: time="2024-06-25T14:37:18.629789230Z" level=info msg="Ensure that sandbox 332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f in task-service has been cleanup successfully" Jun 25 14:37:18.630552 containerd[1245]: time="2024-06-25T14:37:18.630504631Z" level=info msg="StopPodSandbox for \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\"" Jun 25 14:37:18.630743 containerd[1245]: time="2024-06-25T14:37:18.630722311Z" level=info msg="Ensure that sandbox e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c in task-service has been cleanup successfully" Jun 25 14:37:18.633247 containerd[1245]: time="2024-06-25T14:37:18.633191113Z" level=info msg="StopPodSandbox for \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\"" Jun 25 14:37:18.633634 containerd[1245]: time="2024-06-25T14:37:18.633600833Z" level=info msg="Ensure that sandbox f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778 in task-service has been cleanup successfully" Jun 25 14:37:18.669960 containerd[1245]: time="2024-06-25T14:37:18.669904508Z" level=error msg="StopPodSandbox for \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\" failed" error="failed to destroy network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.674943 kubelet[2249]: E0625 14:37:18.674776 2249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:18.674943 kubelet[2249]: E0625 14:37:18.674861 2249 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c"} Jun 25 14:37:18.675342 kubelet[2249]: E0625 14:37:18.675276 2249 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ad4c167-f4c1-437b-b169-12ec098e308e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:37:18.675342 kubelet[2249]: E0625 14:37:18.675308 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ad4c167-f4c1-437b-b169-12ec098e308e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-567786b6b9-gh9kf" podUID="1ad4c167-f4c1-437b-b169-12ec098e308e" Jun 25 14:37:18.677290 containerd[1245]: time="2024-06-25T14:37:18.677233795Z" level=error msg="StopPodSandbox for \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\" failed" error="failed to destroy network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.677737 kubelet[2249]: E0625 14:37:18.677629 2249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:18.677737 kubelet[2249]: E0625 14:37:18.677685 2249 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f"} Jun 25 14:37:18.677737 kubelet[2249]: E0625 14:37:18.677717 2249 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77c6372f-63bb-45d5-91a8-a2813fbef04f\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:37:18.677893 kubelet[2249]: E0625 14:37:18.677739 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77c6372f-63bb-45d5-91a8-a2813fbef04f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rn8b9" podUID="77c6372f-63bb-45d5-91a8-a2813fbef04f" Jun 25 14:37:18.699596 containerd[1245]: time="2024-06-25T14:37:18.699534256Z" level=error msg="StopPodSandbox for \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\" failed" error="failed to destroy network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:18.700102 kubelet[2249]: E0625 14:37:18.700009 2249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:18.700320 kubelet[2249]: E0625 14:37:18.700225 2249 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778"} Jun 25 14:37:18.700320 kubelet[2249]: E0625 14:37:18.700270 2249 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92fe97e2-6b14-42a5-83ef-fce155119efa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:37:18.700320 kubelet[2249]: E0625 14:37:18.700293 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92fe97e2-6b14-42a5-83ef-fce155119efa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6snbw" podUID="92fe97e2-6b14-42a5-83ef-fce155119efa" Jun 25 14:37:18.796074 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:38774.service - OpenSSH per-connection server daemon (10.0.0.1:38774). 
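The repeated CNI add/delete failures above all point at the same root cause: /var/lib/calico/nodename does not exist yet, so every sandbox operation fails before it starts. A minimal Go sketch of the check the error message describes, useful when verifying a node by hand (only the path is taken from the log; the program itself is illustrative):

// nodenamecheck.go - reproduces the check implied by the CNI errors above: the
// Calico plugin stats /var/lib/calico/nodename before doing anything else.
// Minimal illustrative sketch; only the path comes from the log, the rest is ours.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/var/lib/calico/nodename" // path quoted verbatim in the kubelet/containerd errors
	data, err := os.ReadFile(path)
	if err != nil {
		// This is the state the node above is in: calico/node has not (yet)
		// written the file, so every CNI ADD/DEL fails with "stat ... no such file".
		fmt.Fprintf(os.Stderr, "%s missing (%v): calico/node is not ready on this host\n", path, err)
		os.Exit(1)
	}
	fmt.Printf("calico/node has registered this host as %q\n", strings.TrimSpace(string(data)))
}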
Jun 25 14:37:18.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.122:22-10.0.0.1:38774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:18.837000 audit[3204]: USER_ACCT pid=3204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:18.839099 sshd[3204]: Accepted publickey for core from 10.0.0.1 port 38774 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:18.838000 audit[3204]: CRED_ACQ pid=3204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:18.838000 audit[3204]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7f37420 a2=3 a3=1 items=0 ppid=1 pid=3204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:18.838000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:18.840630 sshd[3204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:18.847035 systemd-logind[1235]: New session 8 of user core. Jun 25 14:37:18.866259 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 14:37:18.870000 audit[3204]: USER_START pid=3204 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:18.873000 audit[3206]: CRED_ACQ pid=3206 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:18.900142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778-shm.mount: Deactivated successfully. Jun 25 14:37:18.900234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f-shm.mount: Deactivated successfully. Jun 25 14:37:18.900286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c-shm.mount: Deactivated successfully. 
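The *-shm.mount units that systemd deactivates above are systemd-escaped forms of the sandbox shm paths under /run/containerd. A small illustrative helper that reverses the escaping (the '-' and '\x2d' rules are standard systemd behaviour; the function is ours, not part of any component logged here):

// unitpath.go - reverses systemd's path escaping so the *-shm.mount unit names
// in the log can be read back as filesystem paths.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeMountUnit(unit string) (string, error) {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/') // systemd encodes every '/' in the path as '-'
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err != nil {
				return "", err
			}
			b.WriteByte(byte(v)) // '\x2d' and similar sequences are escaped literal bytes
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String(), nil
}

func main() {
	// Unit name taken from the systemd messages above.
	p, err := unescapeMountUnit(`run-containerd-io.containerd.grpc.v1.cri-sandboxes-e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c-shm.mount`)
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /run/containerd/io.containerd.grpc.v1.cri/sandboxes/<sandbox id>/shm
}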
Jun 25 14:37:19.036903 sshd[3204]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:19.036000 audit[3204]: USER_END pid=3204 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:19.036000 audit[3204]: CRED_DISP pid=3204 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:19.040540 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:38774.service: Deactivated successfully. Jun 25 14:37:19.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.122:22-10.0.0.1:38774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:19.041415 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:37:19.042063 systemd-logind[1235]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:37:19.042864 systemd-logind[1235]: Removed session 8. Jun 25 14:37:19.532711 systemd[1]: Created slice kubepods-besteffort-podd97e3989_35c8_44ea_83c9_925e939d51bb.slice - libcontainer container kubepods-besteffort-podd97e3989_35c8_44ea_83c9_925e939d51bb.slice. Jun 25 14:37:19.535419 containerd[1245]: time="2024-06-25T14:37:19.535379456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfl4t,Uid:d97e3989-35c8-44ea-83c9-925e939d51bb,Namespace:calico-system,Attempt:0,}" Jun 25 14:37:19.610307 containerd[1245]: time="2024-06-25T14:37:19.610245402Z" level=error msg="Failed to destroy network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:19.612375 containerd[1245]: time="2024-06-25T14:37:19.610598483Z" level=error msg="encountered an error cleaning up failed sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:19.612375 containerd[1245]: time="2024-06-25T14:37:19.610654523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfl4t,Uid:d97e3989-35c8-44ea-83c9-925e939d51bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:19.612269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a-shm.mount: Deactivated successfully. 
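The kubepods slice created above encodes the pod's QoS class and UID; the UID is the same one the surrounding kubelet records print for csi-node-driver-kfl4t, with dashes replaced by underscores. A short sketch that reverses that naming, written only from the pattern these two values show:

// slicename.go - recovers the pod UID from a kubepods cgroup slice name, matching
// the pair of values visible in the log (slice ...podd97e3989_35c8_..., podUID
// "d97e3989-35c8-..."). The parsing helper is ours; it just reverses the visible naming.
package main

import (
	"fmt"
	"strings"
)

func podUIDFromSlice(slice string) (qos, uid string, ok bool) {
	name := strings.TrimSuffix(slice, ".slice")
	name = strings.TrimPrefix(name, "kubepods-")
	qos, rest, found := strings.Cut(name, "-pod")
	if !found {
		return "", "", false
	}
	// The slice name uses '_' where the pod UID uses '-'.
	return qos, strings.ReplaceAll(rest, "_", "-"), true
}

func main() {
	qos, uid, ok := podUIDFromSlice("kubepods-besteffort-podd97e3989_35c8_44ea_83c9_925e939d51bb.slice")
	fmt.Println(qos, uid, ok) // besteffort d97e3989-35c8-44ea-83c9-925e939d51bb true
}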
Jun 25 14:37:19.612524 kubelet[2249]: E0625 14:37:19.611119 2249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:19.612524 kubelet[2249]: E0625 14:37:19.611174 2249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfl4t" Jun 25 14:37:19.612524 kubelet[2249]: E0625 14:37:19.611202 2249 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kfl4t" Jun 25 14:37:19.612633 kubelet[2249]: E0625 14:37:19.611239 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kfl4t_calico-system(d97e3989-35c8-44ea-83c9-925e939d51bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kfl4t_calico-system(d97e3989-35c8-44ea-83c9-925e939d51bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:19.626933 kubelet[2249]: I0625 14:37:19.626882 2249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:19.628857 containerd[1245]: time="2024-06-25T14:37:19.628787219Z" level=info msg="StopPodSandbox for \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\"" Jun 25 14:37:19.629058 containerd[1245]: time="2024-06-25T14:37:19.629031499Z" level=info msg="Ensure that sandbox fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a in task-service has been cleanup successfully" Jun 25 14:37:19.670676 containerd[1245]: time="2024-06-25T14:37:19.670600936Z" level=error msg="StopPodSandbox for \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\" failed" error="failed to destroy network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:37:19.671075 kubelet[2249]: E0625 14:37:19.671035 2249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:19.671174 kubelet[2249]: E0625 14:37:19.671088 2249 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a"} Jun 25 14:37:19.671174 kubelet[2249]: E0625 14:37:19.671129 2249 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d97e3989-35c8-44ea-83c9-925e939d51bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:37:19.671174 kubelet[2249]: E0625 14:37:19.671151 2249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d97e3989-35c8-44ea-83c9-925e939d51bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kfl4t" podUID="d97e3989-35c8-44ea-83c9-925e939d51bb" Jun 25 14:37:21.159213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328880230.mount: Deactivated successfully. 
Jun 25 14:37:21.382987 containerd[1245]: time="2024-06-25T14:37:21.382904360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:21.384923 containerd[1245]: time="2024-06-25T14:37:21.384829761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 14:37:21.385913 containerd[1245]: time="2024-06-25T14:37:21.385875642Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:21.387552 containerd[1245]: time="2024-06-25T14:37:21.387516284Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:21.401244 containerd[1245]: time="2024-06-25T14:37:21.401174574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.779690883s" Jun 25 14:37:21.401467 containerd[1245]: time="2024-06-25T14:37:21.401440574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 14:37:21.413209 containerd[1245]: time="2024-06-25T14:37:21.412922783Z" level=info msg="CreateContainer within sandbox \"15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 14:37:21.419130 containerd[1245]: time="2024-06-25T14:37:21.419065148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:21.433778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120959611.mount: Deactivated successfully. Jun 25 14:37:21.440792 containerd[1245]: time="2024-06-25T14:37:21.440742965Z" level=info msg="CreateContainer within sandbox \"15bd2b2502bdbc58b2574d37da83ac9a5199b89cb89f858967e06ba46c4ba9bd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a0061c2f9ee27a4f2d6ca9e863ad6c4eec15ef4cd7d6f38a3b5f55516d4474ec\"" Jun 25 14:37:21.441476 containerd[1245]: time="2024-06-25T14:37:21.441451766Z" level=info msg="StartContainer for \"a0061c2f9ee27a4f2d6ca9e863ad6c4eec15ef4cd7d6f38a3b5f55516d4474ec\"" Jun 25 14:37:21.500192 systemd[1]: Started cri-containerd-a0061c2f9ee27a4f2d6ca9e863ad6c4eec15ef4cd7d6f38a3b5f55516d4474ec.scope - libcontainer container a0061c2f9ee27a4f2d6ca9e863ad6c4eec15ef4cd7d6f38a3b5f55516d4474ec. 
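The pull that just completed reports both the bytes read and the wall-clock duration, so the effective throughput can be read straight off the log. A quick arithmetic check using only those two values:

// pullrate.go - back-of-the-envelope pull throughput for the calico/node image,
// using the "bytes read" and duration values printed by containerd above.
package main

import "fmt"

func main() {
	const bytesRead = 110491350 // "active requests=0, bytes read=110491350"
	const seconds = 3.779690883 // "... in 3.779690883s"
	rate := bytesRead / seconds / (1 << 20)
	fmt.Printf("~%.1f MiB/s over the pull\n", rate) // roughly 28 MiB/s
}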
Jun 25 14:37:21.513000 audit: BPF prog-id=135 op=LOAD Jun 25 14:37:21.513000 audit[3291]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2728 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:21.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130303631633266396565323761346632643663613965383633616436 Jun 25 14:37:21.514000 audit: BPF prog-id=136 op=LOAD Jun 25 14:37:21.514000 audit[3291]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2728 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130303631633266396565323761346632643663613965383633616436 Jun 25 14:37:21.514000 audit: BPF prog-id=136 op=UNLOAD Jun 25 14:37:21.514000 audit: BPF prog-id=135 op=UNLOAD Jun 25 14:37:21.514000 audit: BPF prog-id=137 op=LOAD Jun 25 14:37:21.514000 audit[3291]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2728 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:21.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130303631633266396565323761346632643663613965383633616436 Jun 25 14:37:21.597781 containerd[1245]: time="2024-06-25T14:37:21.597722808Z" level=info msg="StartContainer for \"a0061c2f9ee27a4f2d6ca9e863ad6c4eec15ef4cd7d6f38a3b5f55516d4474ec\" returns successfully" Jun 25 14:37:21.654423 kubelet[2249]: E0625 14:37:21.654375 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:21.670829 kubelet[2249]: I0625 14:37:21.670700 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nnkxj" podStartSLOduration=1.237072361 podStartE2EDuration="14.670684265s" podCreationTimestamp="2024-06-25 14:37:07 +0000 UTC" firstStartedPulling="2024-06-25 14:37:07.968605591 +0000 UTC m=+21.533792818" lastFinishedPulling="2024-06-25 14:37:21.402217495 +0000 UTC m=+34.967404722" observedRunningTime="2024-06-25 14:37:21.670499984 +0000 UTC m=+35.235687211" watchObservedRunningTime="2024-06-25 14:37:21.670684265 +0000 UTC m=+35.235871492" Jun 25 14:37:21.756752 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 14:37:21.756903 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
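The "Nameserver limits exceeded" warning above means the node's resolv.conf listed more nameservers than the kubelet will apply, so only the first three were kept. A sketch of that truncation, assuming the conventional three-entry limit this message refers to; the fourth address in the example is made up:

// dnslimit.go - illustrates the truncation behind the kubelet "Nameserver limits
// exceeded" message above: only the first three resolv.conf nameservers are kept.
// The 3-entry limit is our reading of the message; the fourth input entry is hypothetical.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // conventional resolver limit the kubelet warning refers to

func applyNameservers(ns []string) (applied, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	// First three values are the ones the kubelet reports applying; 9.9.9.9 is made up.
	applied, omitted := applyNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println("applied:", strings.Join(applied, " "))
	fmt.Println("omitted:", strings.Join(omitted, " "))
}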
Jun 25 14:37:22.655830 kubelet[2249]: E0625 14:37:22.655785 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:22.696571 systemd[1]: run-containerd-runc-k8s.io-a0061c2f9ee27a4f2d6ca9e863ad6c4eec15ef4cd7d6f38a3b5f55516d4474ec-runc.BNUr6K.mount: Deactivated successfully. Jun 25 14:37:23.039000 audit[3451]: AVC avc: denied { write } for pid=3451 comm="tee" name="fd" dev="proc" ino=20767 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.042927 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 14:37:23.043092 kernel: audit: type=1400 audit(1719326243.039:525): avc: denied { write } for pid=3451 comm="tee" name="fd" dev="proc" ino=20767 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.043122 kernel: audit: type=1300 audit(1719326243.039:525): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdce58a2f a2=241 a3=1b6 items=1 ppid=3411 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.039000 audit[3451]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdce58a2f a2=241 a3=1b6 items=1 ppid=3411 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.045898 kernel: audit: type=1307 audit(1719326243.039:525): cwd="/etc/service/enabled/bird/log" Jun 25 14:37:23.039000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:37:23.046666 kernel: audit: type=1302 audit(1719326243.039:525): item=0 name="/dev/fd/63" inode=20760 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.039000 audit: PATH item=0 name="/dev/fd/63" inode=20760 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.048753 kernel: audit: type=1327 audit(1719326243.039:525): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.050581 kernel: audit: type=1400 audit(1719326243.040:526): avc: denied { write } for pid=3464 comm="tee" name="fd" dev="proc" ino=17393 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.040000 audit[3464]: AVC avc: denied { write } for pid=3464 comm="tee" name="fd" dev="proc" ino=17393 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.052574 kernel: audit: type=1300 audit(1719326243.040:526): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc4882a2e a2=241 a3=1b6 items=1 ppid=3404 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.040000 audit[3464]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc4882a2e a2=241 a3=1b6 items=1 ppid=3404 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.055480 kernel: audit: type=1307 audit(1719326243.040:526): cwd="/etc/service/enabled/felix/log" Jun 25 14:37:23.040000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:37:23.056380 kernel: audit: type=1302 audit(1719326243.040:526): item=0 name="/dev/fd/63" inode=18951 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.040000 audit: PATH item=0 name="/dev/fd/63" inode=18951 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.058421 kernel: audit: type=1327 audit(1719326243.040:526): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.041000 audit[3461]: AVC avc: denied { write } for pid=3461 comm="tee" name="fd" dev="proc" ino=20771 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.041000 audit[3461]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc1263a1e a2=241 a3=1b6 items=1 ppid=3412 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.041000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:37:23.041000 audit: PATH item=0 name="/dev/fd/63" inode=20764 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.049000 audit[3469]: AVC avc: denied { write } for pid=3469 comm="tee" name="fd" dev="proc" ino=17397 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.049000 audit[3469]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff4bbfa2e a2=241 a3=1b6 items=1 ppid=3408 pid=3469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.049000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 14:37:23.049000 audit: PATH item=0 name="/dev/fd/63" inode=20775 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.049000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.067000 audit[3458]: AVC avc: denied { write } for pid=3458 comm="tee" name="fd" dev="proc" ino=18956 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.067000 audit[3458]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc4306a30 a2=241 a3=1b6 items=1 ppid=3406 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.067000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:37:23.067000 audit: PATH item=0 name="/dev/fd/63" inode=20761 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.079000 audit[3478]: AVC avc: denied { write } for pid=3478 comm="tee" name="fd" dev="proc" ino=19678 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.079000 audit[3478]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffca939a1f a2=241 a3=1b6 items=1 ppid=3415 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.079000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 14:37:23.079000 audit: PATH item=0 name="/dev/fd/63" inode=20782 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.101000 audit[3483]: AVC avc: denied { write } for pid=3483 comm="tee" name="fd" dev="proc" ino=21509 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:37:23.101000 audit[3483]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd700ea2e a2=241 a3=1b6 items=1 ppid=3419 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:23.101000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:37:23.101000 audit: PATH item=0 name="/dev/fd/63" inode=19679 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:37:23.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:37:23.662559 kubelet[2249]: E0625 14:37:23.662523 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 
25 14:37:24.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.122:22-10.0.0.1:40918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:24.050863 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:40918.service - OpenSSH per-connection server daemon (10.0.0.1:40918). Jun 25 14:37:24.079265 kubelet[2249]: I0625 14:37:24.079101 2249 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:37:24.088448 kubelet[2249]: E0625 14:37:24.086739 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:24.097000 audit[3520]: USER_ACCT pid=3520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:24.098433 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 40918 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:24.099000 audit[3520]: CRED_ACQ pid=3520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:24.099000 audit[3520]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed3f56e0 a2=3 a3=1 items=0 ppid=1 pid=3520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.099000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:24.101361 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:24.106820 systemd-logind[1235]: New session 9 of user core. Jun 25 14:37:24.111202 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 14:37:24.123000 audit[3520]: USER_START pid=3520 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:24.125000 audit[3523]: CRED_ACQ pid=3523 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:24.165000 audit[3536]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:24.165000 audit[3536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffc0ca2650 a2=0 a3=1 items=0 ppid=2410 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:24.167000 audit[3536]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:24.167000 audit[3536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc0ca2650 a2=0 a3=1 items=0 ppid=2410 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:24.396070 sshd[3520]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:24.396000 audit[3520]: USER_END pid=3520 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:24.397000 audit[3520]: CRED_DISP pid=3520 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:24.400933 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:40918.service: Deactivated successfully. Jun 25 14:37:24.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.122:22-10.0.0.1:40918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:24.401775 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:37:24.402336 systemd-logind[1235]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:37:24.403166 systemd-logind[1235]: Removed session 9. 
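The audit records in this stretch (USER_START, CRED_ACQ, NETFILTER_CFG) are flat key=value streams, which makes them easy to pick apart when grepping a journal. A small parser of our own for the simple fields; values containing spaces, such as msg='...', are deliberately not handled:

// auditfields.go - splits a flat audit record like the NETFILTER_CFG / SYSCALL lines
// above into key=value fields (quoted values such as comm="iptables-restor" included).
// Our own helper for reading the records; nothing here is an auditd API.
package main

import (
	"fmt"
	"strings"
)

func parseAudit(record string) map[string]string {
	fields := map[string]string{}
	for _, tok := range strings.Fields(record) {
		k, v, ok := strings.Cut(tok, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"'`)
	}
	return fields
}

func main() {
	f := parseAudit(`audit[3536]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3536 comm="iptables-restor"`)
	fmt.Println(f["table"], f["entries"], f["comm"]) // filter:95 15 iptables-restor
}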
Jun 25 14:37:24.490920 systemd-networkd[1078]: vxlan.calico: Link UP Jun 25 14:37:24.490927 systemd-networkd[1078]: vxlan.calico: Gained carrier Jun 25 14:37:24.516000 audit: BPF prog-id=138 op=LOAD Jun 25 14:37:24.516000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef929538 a2=70 a3=ffffef9295a8 items=0 ppid=3554 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.516000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:37:24.516000 audit: BPF prog-id=138 op=UNLOAD Jun 25 14:37:24.516000 audit: BPF prog-id=139 op=LOAD Jun 25 14:37:24.516000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffef929538 a2=70 a3=4b243c items=0 ppid=3554 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.516000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:37:24.516000 audit: BPF prog-id=139 op=UNLOAD Jun 25 14:37:24.516000 audit: BPF prog-id=140 op=LOAD Jun 25 14:37:24.516000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffef9294d8 a2=70 a3=ffffef929548 items=0 ppid=3554 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.516000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:37:24.517000 audit: BPF prog-id=140 op=UNLOAD Jun 25 14:37:24.517000 audit: BPF prog-id=141 op=LOAD Jun 25 14:37:24.517000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef929508 a2=70 a3=1e0984a9 items=0 ppid=3554 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.517000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:37:24.533000 audit: BPF prog-id=141 op=UNLOAD Jun 25 14:37:24.581000 audit[3659]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3659 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:24.581000 audit[3659]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcc6e14b0 a2=0 a3=ffff85d3ffa8 items=0 ppid=3554 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.581000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:24.583000 audit[3658]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=3658 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:24.583000 audit[3658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=fffff17761b0 a2=0 a3=ffffa5ff9fa8 items=0 ppid=3554 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.583000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:24.587000 audit[3657]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3657 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:24.587000 audit[3657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc631f240 a2=0 a3=ffff921e4fa8 items=0 ppid=3554 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.587000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:24.590000 audit[3662]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3662 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:24.590000 audit[3662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=fffffdf68100 a2=0 a3=ffffbb37dfa8 items=0 ppid=3554 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:24.590000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:24.663902 kubelet[2249]: E0625 14:37:24.663598 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:26.464095 systemd-networkd[1078]: vxlan.calico: Gained IPv6LL Jun 25 14:37:29.430452 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:59892.service - OpenSSH per-connection server daemon (10.0.0.1:59892). Jun 25 14:37:29.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.122:22-10.0.0.1:59892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:29.431485 kernel: kauditd_printk_skb: 70 callbacks suppressed Jun 25 14:37:29.431573 kernel: audit: type=1130 audit(1719326249.429:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.122:22-10.0.0.1:59892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:37:29.481342 sshd[3671]: Accepted publickey for core from 10.0.0.1 port 59892 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:29.480000 audit[3671]: USER_ACCT pid=3671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.483355 sshd[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:29.484065 kernel: audit: type=1101 audit(1719326249.480:556): pid=3671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.484170 kernel: audit: type=1103 audit(1719326249.481:557): pid=3671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.481000 audit[3671]: CRED_ACQ pid=3671 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.487693 kernel: audit: type=1006 audit(1719326249.482:558): pid=3671 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 14:37:29.487795 kernel: audit: type=1300 audit(1719326249.482:558): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0573190 a2=3 a3=1 items=0 ppid=1 pid=3671 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:29.482000 audit[3671]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0573190 a2=3 a3=1 items=0 ppid=1 pid=3671 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:29.490131 kernel: audit: type=1327 audit(1719326249.482:558): proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:29.482000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:29.495072 systemd-logind[1235]: New session 10 of user core. Jun 25 14:37:29.502157 systemd[1]: Started session-10.scope - Session 10 of User core. 
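The PROCTITLE values scattered through these audit records are the process argv, hex-encoded with NUL separators; the value in the sshd record above decodes to "sshd: core [priv]". A short decoder (ours, not part of auditd):

// proctitle.go - decodes the hex-encoded PROCTITLE values in the audit records above.
// The value is the process argv with NUL separators; the decoder is our own helper.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(string(raw), "\x00"), nil
}

func main() {
	// Value copied from the sshd audit record above.
	argv, err := decodeProctitle("737368643A20636F7265205B707269765D")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", argv) // ["sshd: core [priv]"]
}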
Jun 25 14:37:29.504000 audit[3671]: USER_START pid=3671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.506000 audit[3673]: CRED_ACQ pid=3673 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.510959 kernel: audit: type=1105 audit(1719326249.504:559): pid=3671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.511049 kernel: audit: type=1103 audit(1719326249.506:560): pid=3673 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.528708 containerd[1245]: time="2024-06-25T14:37:29.528489223Z" level=info msg="StopPodSandbox for \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\"" Jun 25 14:37:29.742490 sshd[3671]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:29.743000 audit[3671]: USER_END pid=3671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.743000 audit[3671]: CRED_DISP pid=3671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.750866 kernel: audit: type=1106 audit(1719326249.743:561): pid=3671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.750948 kernel: audit: type=1104 audit(1719326249.743:562): pid=3671 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.122:22-10.0.0.1:59892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:29.753222 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:59892.service: Deactivated successfully. Jun 25 14:37:29.753907 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:37:29.754828 systemd-logind[1235]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:37:29.762865 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:59906.service - OpenSSH per-connection server daemon (10.0.0.1:59906). 
Jun 25 14:37:29.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.122:22-10.0.0.1:59906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:29.764648 systemd-logind[1235]: Removed session 10. Jun 25 14:37:29.803000 audit[3723]: USER_ACCT pid=3723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.804442 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 59906 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:29.804000 audit[3723]: CRED_ACQ pid=3723 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.804000 audit[3723]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff32fc660 a2=3 a3=1 items=0 ppid=1 pid=3723 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:29.804000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:29.805920 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:29.813110 systemd-logind[1235]: New session 11 of user core. Jun 25 14:37:29.822196 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 14:37:29.827000 audit[3723]: USER_START pid=3723 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.829000 audit[3726]: CRED_ACQ pid=3726 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.622 [INFO][3690] k8s.go 608: Cleaning up netns ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.623 [INFO][3690] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" iface="eth0" netns="/var/run/netns/cni-19d6559a-2e9c-f955-4869-97263dfdbaf5" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.624 [INFO][3690] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" iface="eth0" netns="/var/run/netns/cni-19d6559a-2e9c-f955-4869-97263dfdbaf5" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.628 [INFO][3690] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" iface="eth0" netns="/var/run/netns/cni-19d6559a-2e9c-f955-4869-97263dfdbaf5" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.628 [INFO][3690] k8s.go 615: Releasing IP address(es) ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.628 [INFO][3690] utils.go 188: Calico CNI releasing IP address ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.812 [INFO][3715] ipam_plugin.go 411: Releasing address using handleID ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.812 [INFO][3715] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.812 [INFO][3715] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.827 [WARNING][3715] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.827 [INFO][3715] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.829 [INFO][3715] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:29.833079 containerd[1245]: 2024-06-25 14:37:29.831 [INFO][3690] k8s.go 621: Teardown processing complete. ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:29.835679 containerd[1245]: time="2024-06-25T14:37:29.835637126Z" level=info msg="TearDown network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\" successfully" Jun 25 14:37:29.835787 containerd[1245]: time="2024-06-25T14:37:29.835770526Z" level=info msg="StopPodSandbox for \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\" returns successfully" Jun 25 14:37:29.838683 kubelet[2249]: E0625 14:37:29.837368 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:29.839108 containerd[1245]: time="2024-06-25T14:37:29.838268328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6snbw,Uid:92fe97e2-6b14-42a5-83ef-fce155119efa,Namespace:kube-system,Attempt:1,}" Jun 25 14:37:29.837392 systemd[1]: run-netns-cni\x2d19d6559a\x2d2e9c\x2df955\x2d4869\x2d97263dfdbaf5.mount: Deactivated successfully. 
Jun 25 14:37:29.992089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:37:29.992244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calief5adfbae88: link becomes ready Jun 25 14:37:29.992420 systemd-networkd[1078]: calief5adfbae88: Link UP Jun 25 14:37:29.992556 systemd-networkd[1078]: calief5adfbae88: Gained carrier Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.892 [INFO][3727] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0 coredns-7db6d8ff4d- kube-system 92fe97e2-6b14-42a5-83ef-fce155119efa 777 0 2024-06-25 14:37:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-6snbw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief5adfbae88 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.893 [INFO][3727] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.935 [INFO][3746] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" HandleID="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.947 [INFO][3746] ipam_plugin.go 264: Auto assigning IP ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" HandleID="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dbb10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-6snbw", "timestamp":"2024-06-25 14:37:29.935874093 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.949 [INFO][3746] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.949 [INFO][3746] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.949 [INFO][3746] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.954 [INFO][3746] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.963 [INFO][3746] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.969 [INFO][3746] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.971 [INFO][3746] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.974 [INFO][3746] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.974 [INFO][3746] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.976 [INFO][3746] ipam.go 1685: Creating new handle: k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.980 [INFO][3746] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.986 [INFO][3746] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.986 [INFO][3746] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" host="localhost" Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.986 [INFO][3746] ipam_plugin.go 373: Released host-wide IPAM lock. 
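The IPAM trace above shows Calico confirming the host's block affinity for 192.168.88.128/26 and then claiming 192.168.88.129 for the coredns pod. A /26 spans 64 addresses (.128 through .191), which is easy to sanity-check with the standard library; this is only an illustrative check of the arithmetic, not code from Calico:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")        # affinity block from the log above
    print(block.num_addresses)                                # 64 addresses, .128 through .191
    print(ipaddress.ip_address("192.168.88.129") in block)    # True - coredns pod IP claimed here
    print(ipaddress.ip_address("192.168.88.130") in block)    # True - handed out further down for calico-kube-controllers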
Jun 25 14:37:30.013104 containerd[1245]: 2024-06-25 14:37:29.986 [INFO][3746] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" HandleID="k8s-pod-network.5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:30.013710 containerd[1245]: 2024-06-25 14:37:29.988 [INFO][3727] k8s.go 386: Populated endpoint ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92fe97e2-6b14-42a5-83ef-fce155119efa", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-6snbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief5adfbae88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:30.013710 containerd[1245]: 2024-06-25 14:37:29.989 [INFO][3727] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:30.013710 containerd[1245]: 2024-06-25 14:37:29.989 [INFO][3727] dataplane_linux.go 68: Setting the host side veth name to calief5adfbae88 ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:30.013710 containerd[1245]: 2024-06-25 14:37:29.992 [INFO][3727] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:30.013710 containerd[1245]: 2024-06-25 14:37:29.992 [INFO][3727] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92fe97e2-6b14-42a5-83ef-fce155119efa", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f", Pod:"coredns-7db6d8ff4d-6snbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief5adfbae88", MAC:"62:47:7c:64:78:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:30.013710 containerd[1245]: 2024-06-25 14:37:30.004 [INFO][3727] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6snbw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:30.020000 audit[3766]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3766 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:30.020000 audit[3766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=fffff1078ca0 a2=0 a3=ffff9f2fcfa8 items=0 ppid=3554 pid=3766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.020000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:30.039697 containerd[1245]: time="2024-06-25T14:37:30.039214340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:30.039697 containerd[1245]: time="2024-06-25T14:37:30.039319980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:30.039697 containerd[1245]: time="2024-06-25T14:37:30.039357380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:30.039697 containerd[1245]: time="2024-06-25T14:37:30.039398620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:30.062200 systemd[1]: Started cri-containerd-5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f.scope - libcontainer container 5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f. Jun 25 14:37:30.073000 audit: BPF prog-id=142 op=LOAD Jun 25 14:37:30.074000 audit: BPF prog-id=143 op=LOAD Jun 25 14:37:30.074000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=3780 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313266316263636564323066313431653839386436343531663531 Jun 25 14:37:30.074000 audit: BPF prog-id=144 op=LOAD Jun 25 14:37:30.074000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=3780 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313266316263636564323066313431653839386436343531663531 Jun 25 14:37:30.074000 audit: BPF prog-id=144 op=UNLOAD Jun 25 14:37:30.074000 audit: BPF prog-id=143 op=UNLOAD Jun 25 14:37:30.074000 audit: BPF prog-id=145 op=LOAD Jun 25 14:37:30.074000 audit[3791]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=3780 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313266316263636564323066313431653839386436343531663531 Jun 25 14:37:30.076397 systemd-resolved[1184]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:37:30.097187 sshd[3723]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:30.104946 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:59922.service - OpenSSH per-connection server daemon (10.0.0.1:59922). Jun 25 14:37:30.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.122:22-10.0.0.1:59922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:37:30.112000 audit[3723]: USER_END pid=3723 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.112000 audit[3723]: CRED_DISP pid=3723 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.115329 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:59906.service: Deactivated successfully. Jun 25 14:37:30.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.122:22-10.0.0.1:59906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:30.116120 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:37:30.119092 systemd-logind[1235]: Session 11 logged out. Waiting for processes to exit. Jun 25 14:37:30.120251 systemd-logind[1235]: Removed session 11. Jun 25 14:37:30.143903 containerd[1245]: time="2024-06-25T14:37:30.143857666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6snbw,Uid:92fe97e2-6b14-42a5-83ef-fce155119efa,Namespace:kube-system,Attempt:1,} returns sandbox id \"5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f\"" Jun 25 14:37:30.145753 kubelet[2249]: E0625 14:37:30.144865 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:30.146000 audit[3810]: USER_ACCT pid=3810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.147302 sshd[3810]: Accepted publickey for core from 10.0.0.1 port 59922 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:30.147000 audit[3810]: CRED_ACQ pid=3810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.147000 audit[3810]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedbef820 a2=3 a3=1 items=0 ppid=1 pid=3810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.147000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:30.149561 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:30.150031 containerd[1245]: time="2024-06-25T14:37:30.149758508Z" level=info msg="CreateContainer within sandbox \"5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:37:30.154552 systemd-logind[1235]: New session 12 of user core. Jun 25 14:37:30.158170 systemd[1]: Started session-12.scope - Session 12 of User core. 
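The audit PROCTITLE records hex-encode the process command line with NUL bytes separating the arguments, so they can be decoded directly: the short entry above (737368643A20636F7265205B707269765D) reads "sshd: core [priv]", the iptables-restore entries further down decode to "iptables-restore -w 5 -W 100000 --noflush --counters", and the long runc entries are "runc --root /run/containerd/runc/k8s.io --log ..." invocations that appear to be cut off at the audit field's length limit. A small decoder (the helper name is illustrative):

    def decode_proctitle(hexstr):
        # audit stores the command line as hex, with NUL bytes between argv entries
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode(errors="replace")

    print(decode_proctitle("737368643A20636F7265205B707269765D"))   # sshd: core [priv]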
Jun 25 14:37:30.161000 audit[3810]: USER_START pid=3810 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.163000 audit[3819]: CRED_ACQ pid=3819 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.166748 containerd[1245]: time="2024-06-25T14:37:30.166697836Z" level=info msg="CreateContainer within sandbox \"5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af1fc46fb2aee0dfd3872ca0f1dd67705b761d6131eee4755cf454ebd898dc30\"" Jun 25 14:37:30.167388 containerd[1245]: time="2024-06-25T14:37:30.167292916Z" level=info msg="StartContainer for \"af1fc46fb2aee0dfd3872ca0f1dd67705b761d6131eee4755cf454ebd898dc30\"" Jun 25 14:37:30.189210 systemd[1]: Started cri-containerd-af1fc46fb2aee0dfd3872ca0f1dd67705b761d6131eee4755cf454ebd898dc30.scope - libcontainer container af1fc46fb2aee0dfd3872ca0f1dd67705b761d6131eee4755cf454ebd898dc30. Jun 25 14:37:30.198000 audit: BPF prog-id=146 op=LOAD Jun 25 14:37:30.199000 audit: BPF prog-id=147 op=LOAD Jun 25 14:37:30.199000 audit[3828]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3780 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316663343666623261656530646664333837326361306631646436 Jun 25 14:37:30.199000 audit: BPF prog-id=148 op=LOAD Jun 25 14:37:30.199000 audit[3828]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3780 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316663343666623261656530646664333837326361306631646436 Jun 25 14:37:30.199000 audit: BPF prog-id=148 op=UNLOAD Jun 25 14:37:30.199000 audit: BPF prog-id=147 op=UNLOAD Jun 25 14:37:30.199000 audit: BPF prog-id=149 op=LOAD Jun 25 14:37:30.199000 audit[3828]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3780 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316663343666623261656530646664333837326361306631646436 Jun 25 
14:37:30.217712 containerd[1245]: time="2024-06-25T14:37:30.217662378Z" level=info msg="StartContainer for \"af1fc46fb2aee0dfd3872ca0f1dd67705b761d6131eee4755cf454ebd898dc30\" returns successfully" Jun 25 14:37:30.416624 sshd[3810]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:30.416000 audit[3810]: USER_END pid=3810 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.417000 audit[3810]: CRED_DISP pid=3810 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:30.419727 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:59922.service: Deactivated successfully. Jun 25 14:37:30.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.122:22-10.0.0.1:59922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:30.420481 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:37:30.421083 systemd-logind[1235]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:37:30.421792 systemd-logind[1235]: Removed session 12. Jun 25 14:37:30.677031 kubelet[2249]: E0625 14:37:30.676034 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:30.689665 kubelet[2249]: I0625 14:37:30.689607 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6snbw" podStartSLOduration=29.689589744 podStartE2EDuration="29.689589744s" podCreationTimestamp="2024-06-25 14:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:37:30.688997224 +0000 UTC m=+44.254184451" watchObservedRunningTime="2024-06-25 14:37:30.689589744 +0000 UTC m=+44.254776971" Jun 25 14:37:30.705000 audit[3870]: NETFILTER_CFG table=filter:102 family=2 entries=14 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:30.705000 audit[3870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffdcb42fb0 a2=0 a3=1 items=0 ppid=2410 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:30.706000 audit[3870]: NETFILTER_CFG table=nat:103 family=2 entries=14 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:30.706000 audit[3870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdcb42fb0 a2=0 a3=1 items=0 ppid=2410 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.706000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:30.748000 audit[3872]: NETFILTER_CFG table=filter:104 family=2 entries=11 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:30.748000 audit[3872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffff921d770 a2=0 a3=1 items=0 ppid=2410 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:30.749000 audit[3872]: NETFILTER_CFG table=nat:105 family=2 entries=35 op=nft_register_chain pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:30.749000 audit[3872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff921d770 a2=0 a3=1 items=0 ppid=2410 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:30.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:31.528471 containerd[1245]: time="2024-06-25T14:37:31.528414936Z" level=info msg="StopPodSandbox for \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\"" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.580 [INFO][3894] k8s.go 608: Cleaning up netns ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.580 [INFO][3894] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" iface="eth0" netns="/var/run/netns/cni-92719dbf-661a-e533-371b-8b3a567f7ceb" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.580 [INFO][3894] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" iface="eth0" netns="/var/run/netns/cni-92719dbf-661a-e533-371b-8b3a567f7ceb" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.581 [INFO][3894] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" iface="eth0" netns="/var/run/netns/cni-92719dbf-661a-e533-371b-8b3a567f7ceb" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.581 [INFO][3894] k8s.go 615: Releasing IP address(es) ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.581 [INFO][3894] utils.go 188: Calico CNI releasing IP address ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.604 [INFO][3901] ipam_plugin.go 411: Releasing address using handleID ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.604 [INFO][3901] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.604 [INFO][3901] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.615 [WARNING][3901] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.615 [INFO][3901] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.616 [INFO][3901] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:31.619454 containerd[1245]: 2024-06-25 14:37:31.618 [INFO][3894] k8s.go 621: Teardown processing complete. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:31.621541 systemd[1]: run-netns-cni\x2d92719dbf\x2d661a\x2de533\x2d371b\x2d8b3a567f7ceb.mount: Deactivated successfully. 
Jun 25 14:37:31.624102 containerd[1245]: time="2024-06-25T14:37:31.624053535Z" level=info msg="TearDown network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\" successfully" Jun 25 14:37:31.624256 containerd[1245]: time="2024-06-25T14:37:31.624235975Z" level=info msg="StopPodSandbox for \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\" returns successfully" Jun 25 14:37:31.625012 containerd[1245]: time="2024-06-25T14:37:31.624961855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567786b6b9-gh9kf,Uid:1ad4c167-f4c1-437b-b169-12ec098e308e,Namespace:calico-system,Attempt:1,}" Jun 25 14:37:31.679230 kubelet[2249]: E0625 14:37:31.677813 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:31.776879 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:37:31.777024 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliee9850568fc: link becomes ready Jun 25 14:37:31.775450 systemd-networkd[1078]: caliee9850568fc: Link UP Jun 25 14:37:31.778059 systemd-networkd[1078]: caliee9850568fc: Gained carrier Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.686 [INFO][3908] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0 calico-kube-controllers-567786b6b9- calico-system 1ad4c167-f4c1-437b-b169-12ec098e308e 826 0 2024-06-25 14:37:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:567786b6b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-567786b6b9-gh9kf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliee9850568fc [] []}} ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.686 [INFO][3908] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.723 [INFO][3920] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" HandleID="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.742 [INFO][3920] ipam_plugin.go 264: Auto assigning IP ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" HandleID="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000352cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-567786b6b9-gh9kf", "timestamp":"2024-06-25 14:37:31.723789416 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.742 [INFO][3920] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.743 [INFO][3920] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.743 [INFO][3920] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.745 [INFO][3920] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.750 [INFO][3920] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.756 [INFO][3920] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.758 [INFO][3920] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.760 [INFO][3920] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.760 [INFO][3920] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.762 [INFO][3920] ipam.go 1685: Creating new handle: k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300 Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.765 [INFO][3920] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.770 [INFO][3920] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.770 [INFO][3920] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" host="localhost" Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.770 [INFO][3920] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:37:31.794430 containerd[1245]: 2024-06-25 14:37:31.771 [INFO][3920] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" HandleID="k8s-pod-network.ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.795453 containerd[1245]: 2024-06-25 14:37:31.772 [INFO][3908] k8s.go 386: Populated endpoint ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0", GenerateName:"calico-kube-controllers-567786b6b9-", Namespace:"calico-system", SelfLink:"", UID:"1ad4c167-f4c1-437b-b169-12ec098e308e", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567786b6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-567786b6b9-gh9kf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliee9850568fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:31.795453 containerd[1245]: 2024-06-25 14:37:31.773 [INFO][3908] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.795453 containerd[1245]: 2024-06-25 14:37:31.773 [INFO][3908] dataplane_linux.go 68: Setting the host side veth name to caliee9850568fc ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.795453 containerd[1245]: 2024-06-25 14:37:31.777 [INFO][3908] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.795453 containerd[1245]: 2024-06-25 14:37:31.782 [INFO][3908] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0", GenerateName:"calico-kube-controllers-567786b6b9-", Namespace:"calico-system", SelfLink:"", UID:"1ad4c167-f4c1-437b-b169-12ec098e308e", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567786b6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300", Pod:"calico-kube-controllers-567786b6b9-gh9kf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliee9850568fc", MAC:"2a:fa:c3:fa:fd:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:31.795453 containerd[1245]: 2024-06-25 14:37:31.792 [INFO][3908] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300" Namespace="calico-system" Pod="calico-kube-controllers-567786b6b9-gh9kf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:31.807000 audit[3944]: NETFILTER_CFG table=filter:106 family=2 entries=38 op=nft_register_chain pid=3944 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:31.807000 audit[3944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20336 a0=3 a1=fffffa30a850 a2=0 a3=ffffb9936fa8 items=0 ppid=3554 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:31.807000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:31.822848 containerd[1245]: time="2024-06-25T14:37:31.822407816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:31.822848 containerd[1245]: time="2024-06-25T14:37:31.822816016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:31.823071 containerd[1245]: time="2024-06-25T14:37:31.822835456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:31.823071 containerd[1245]: time="2024-06-25T14:37:31.822852136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:31.837161 systemd[1]: Started cri-containerd-ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300.scope - libcontainer container ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300. Jun 25 14:37:31.849000 audit: BPF prog-id=150 op=LOAD Jun 25 14:37:31.849000 audit: BPF prog-id=151 op=LOAD Jun 25 14:37:31.849000 audit[3962]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3953 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:31.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326535366636343663633931303134353966633034653733303138 Jun 25 14:37:31.849000 audit: BPF prog-id=152 op=LOAD Jun 25 14:37:31.849000 audit[3962]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3953 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:31.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326535366636343663633931303134353966633034653733303138 Jun 25 14:37:31.849000 audit: BPF prog-id=152 op=UNLOAD Jun 25 14:37:31.849000 audit: BPF prog-id=151 op=UNLOAD Jun 25 14:37:31.849000 audit: BPF prog-id=153 op=LOAD Jun 25 14:37:31.849000 audit[3962]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3953 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:31.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165326535366636343663633931303134353966633034653733303138 Jun 25 14:37:31.851122 systemd-resolved[1184]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:37:31.871578 containerd[1245]: time="2024-06-25T14:37:31.871535036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-567786b6b9-gh9kf,Uid:1ad4c167-f4c1-437b-b169-12ec098e308e,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300\"" Jun 25 14:37:31.875177 containerd[1245]: time="2024-06-25T14:37:31.873727957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:37:32.028360 systemd-networkd[1078]: calief5adfbae88: Gained IPv6LL Jun 25 14:37:32.685198 kubelet[2249]: E0625 14:37:32.682186 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:33.378036 containerd[1245]: time="2024-06-25T14:37:33.377992929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:33.379143 containerd[1245]: time="2024-06-25T14:37:33.379096369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:37:33.380222 containerd[1245]: time="2024-06-25T14:37:33.380196409Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:33.384073 containerd[1245]: time="2024-06-25T14:37:33.384041971Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:33.386348 containerd[1245]: time="2024-06-25T14:37:33.386319932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:33.387317 containerd[1245]: time="2024-06-25T14:37:33.387287892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.513500455s" Jun 25 14:37:33.387445 containerd[1245]: time="2024-06-25T14:37:33.387416892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:37:33.397840 containerd[1245]: time="2024-06-25T14:37:33.397802816Z" level=info msg="CreateContainer within sandbox \"ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:37:33.409754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516925358.mount: Deactivated successfully. Jun 25 14:37:33.411451 containerd[1245]: time="2024-06-25T14:37:33.411411741Z" level=info msg="CreateContainer within sandbox \"ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ecab0c0164837929254fb06b943b270cf2137f3957c11eb6b6d8ec81b464412f\"" Jun 25 14:37:33.413827 containerd[1245]: time="2024-06-25T14:37:33.413741581Z" level=info msg="StartContainer for \"ecab0c0164837929254fb06b943b270cf2137f3957c11eb6b6d8ec81b464412f\"" Jun 25 14:37:33.440179 systemd[1]: Started cri-containerd-ecab0c0164837929254fb06b943b270cf2137f3957c11eb6b6d8ec81b464412f.scope - libcontainer container ecab0c0164837929254fb06b943b270cf2137f3957c11eb6b6d8ec81b464412f. 
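The pull records above report 31361057 bytes read and a wall-clock pull time of 1.513500455s for ghcr.io/flatcar/calico/kube-controllers:v3.28.0, i.e. an effective rate of roughly 20 MB/s. Back-of-the-envelope only, using just the two figures printed in the log:

    bytes_read = 31_361_057       # "bytes read" reported by containerd above
    pull_seconds = 1.513500455    # "... in 1.513500455s" from the Pulled message
    print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")   # ~20.7 MB/s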
Jun 25 14:37:33.449000 audit: BPF prog-id=154 op=LOAD Jun 25 14:37:33.449000 audit: BPF prog-id=155 op=LOAD Jun 25 14:37:33.449000 audit[4003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3953 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:33.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563616230633031363438333739323932353466623036623934336232 Jun 25 14:37:33.449000 audit: BPF prog-id=156 op=LOAD Jun 25 14:37:33.449000 audit[4003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3953 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:33.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563616230633031363438333739323932353466623036623934336232 Jun 25 14:37:33.449000 audit: BPF prog-id=156 op=UNLOAD Jun 25 14:37:33.449000 audit: BPF prog-id=155 op=UNLOAD Jun 25 14:37:33.449000 audit: BPF prog-id=157 op=LOAD Jun 25 14:37:33.449000 audit[4003]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3953 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:33.449000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563616230633031363438333739323932353466623036623934336232 Jun 25 14:37:33.469197 containerd[1245]: time="2024-06-25T14:37:33.469136441Z" level=info msg="StartContainer for \"ecab0c0164837929254fb06b943b270cf2137f3957c11eb6b6d8ec81b464412f\" returns successfully" Jun 25 14:37:33.500391 systemd-networkd[1078]: caliee9850568fc: Gained IPv6LL Jun 25 14:37:33.531029 containerd[1245]: time="2024-06-25T14:37:33.528332303Z" level=info msg="StopPodSandbox for \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\"" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.603 [INFO][4048] k8s.go 608: Cleaning up netns ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.603 [INFO][4048] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" iface="eth0" netns="/var/run/netns/cni-81789c1c-4f2d-0189-07eb-a9db9ac94073" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.603 [INFO][4048] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" iface="eth0" netns="/var/run/netns/cni-81789c1c-4f2d-0189-07eb-a9db9ac94073" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.603 [INFO][4048] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" iface="eth0" netns="/var/run/netns/cni-81789c1c-4f2d-0189-07eb-a9db9ac94073" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.603 [INFO][4048] k8s.go 615: Releasing IP address(es) ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.603 [INFO][4048] utils.go 188: Calico CNI releasing IP address ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.625 [INFO][4056] ipam_plugin.go 411: Releasing address using handleID ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.625 [INFO][4056] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.625 [INFO][4056] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.634 [WARNING][4056] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.634 [INFO][4056] ipam_plugin.go 439: Releasing address using workloadID ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.636 [INFO][4056] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:33.644632 containerd[1245]: 2024-06-25 14:37:33.641 [INFO][4048] k8s.go 621: Teardown processing complete. 
ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:33.645268 containerd[1245]: time="2024-06-25T14:37:33.645225825Z" level=info msg="TearDown network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\" successfully" Jun 25 14:37:33.645354 containerd[1245]: time="2024-06-25T14:37:33.645336505Z" level=info msg="StopPodSandbox for \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\" returns successfully" Jun 25 14:37:33.645738 kubelet[2249]: E0625 14:37:33.645713 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:33.646757 containerd[1245]: time="2024-06-25T14:37:33.646104705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rn8b9,Uid:77c6372f-63bb-45d5-91a8-a2813fbef04f,Namespace:kube-system,Attempt:1,}" Jun 25 14:37:33.697315 kubelet[2249]: I0625 14:37:33.697250 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-567786b6b9-gh9kf" podStartSLOduration=25.182287708 podStartE2EDuration="26.697234963s" podCreationTimestamp="2024-06-25 14:37:07 +0000 UTC" firstStartedPulling="2024-06-25 14:37:31.873345717 +0000 UTC m=+45.438532944" lastFinishedPulling="2024-06-25 14:37:33.388292972 +0000 UTC m=+46.953480199" observedRunningTime="2024-06-25 14:37:33.695398403 +0000 UTC m=+47.260585630" watchObservedRunningTime="2024-06-25 14:37:33.697234963 +0000 UTC m=+47.262422190" Jun 25 14:37:33.851851 systemd-networkd[1078]: cali04415f7eb1b: Link UP Jun 25 14:37:33.853817 systemd-networkd[1078]: cali04415f7eb1b: Gained carrier Jun 25 14:37:33.854257 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:37:33.854342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali04415f7eb1b: link becomes ready Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.763 [INFO][4073] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0 coredns-7db6d8ff4d- kube-system 77c6372f-63bb-45d5-91a8-a2813fbef04f 849 0 2024-06-25 14:37:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-rn8b9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04415f7eb1b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.763 [INFO][4073] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.805 [INFO][4099] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" HandleID="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.869164 containerd[1245]: 
2024-06-25 14:37:33.815 [INFO][4099] ipam_plugin.go 264: Auto assigning IP ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" HandleID="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000123cd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-rn8b9", "timestamp":"2024-06-25 14:37:33.805328802 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.815 [INFO][4099] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.816 [INFO][4099] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.816 [INFO][4099] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.818 [INFO][4099] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.822 [INFO][4099] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.829 [INFO][4099] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.831 [INFO][4099] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.837 [INFO][4099] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.837 [INFO][4099] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.838 [INFO][4099] ipam.go 1685: Creating new handle: k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694 Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.843 [INFO][4099] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.847 [INFO][4099] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.847 [INFO][4099] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" host="localhost" Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.847 [INFO][4099] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:37:33.869164 containerd[1245]: 2024-06-25 14:37:33.847 [INFO][4099] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" HandleID="k8s-pod-network.b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.869960 containerd[1245]: 2024-06-25 14:37:33.849 [INFO][4073] k8s.go 386: Populated endpoint ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"77c6372f-63bb-45d5-91a8-a2813fbef04f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-rn8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04415f7eb1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:33.869960 containerd[1245]: 2024-06-25 14:37:33.849 [INFO][4073] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.869960 containerd[1245]: 2024-06-25 14:37:33.849 [INFO][4073] dataplane_linux.go 68: Setting the host side veth name to cali04415f7eb1b ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.869960 containerd[1245]: 2024-06-25 14:37:33.854 [INFO][4073] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.869960 containerd[1245]: 2024-06-25 14:37:33.857 [INFO][4073] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"77c6372f-63bb-45d5-91a8-a2813fbef04f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694", Pod:"coredns-7db6d8ff4d-rn8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04415f7eb1b", MAC:"86:15:7a:f5:0b:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:33.869960 containerd[1245]: 2024-06-25 14:37:33.866 [INFO][4073] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rn8b9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:33.881000 audit[4120]: NETFILTER_CFG table=filter:107 family=2 entries=40 op=nft_register_chain pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:33.881000 audit[4120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21072 a0=3 a1=fffff17b1a80 a2=0 a3=ffffae3e7fa8 items=0 ppid=3554 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:33.881000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:33.901828 containerd[1245]: time="2024-06-25T14:37:33.901658957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:33.901828 containerd[1245]: time="2024-06-25T14:37:33.901781277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:33.902000 containerd[1245]: time="2024-06-25T14:37:33.901818397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:33.902000 containerd[1245]: time="2024-06-25T14:37:33.901857437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:33.926191 systemd[1]: Started cri-containerd-b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694.scope - libcontainer container b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694. Jun 25 14:37:33.939000 audit: BPF prog-id=158 op=LOAD Jun 25 14:37:33.940000 audit: BPF prog-id=159 op=LOAD Jun 25 14:37:33.940000 audit[4140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4130 pid=4140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:33.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234616436643765393133643963346665633037666161353734353838 Jun 25 14:37:33.940000 audit: BPF prog-id=160 op=LOAD Jun 25 14:37:33.940000 audit[4140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4130 pid=4140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:33.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234616436643765393133643963346665633037666161353734353838 Jun 25 14:37:33.940000 audit: BPF prog-id=160 op=UNLOAD Jun 25 14:37:33.940000 audit: BPF prog-id=159 op=UNLOAD Jun 25 14:37:33.940000 audit: BPF prog-id=161 op=LOAD Jun 25 14:37:33.940000 audit[4140]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4130 pid=4140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:33.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234616436643765393133643963346665633037666161353734353838 Jun 25 14:37:33.942020 systemd-resolved[1184]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:37:33.960462 containerd[1245]: time="2024-06-25T14:37:33.960419698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rn8b9,Uid:77c6372f-63bb-45d5-91a8-a2813fbef04f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694\"" Jun 25 14:37:33.961235 kubelet[2249]: E0625 14:37:33.961205 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:33.964284 containerd[1245]: time="2024-06-25T14:37:33.964245420Z" level=info msg="CreateContainer within sandbox \"b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:37:33.980999 containerd[1245]: time="2024-06-25T14:37:33.980937186Z" level=info msg="CreateContainer within sandbox \"b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d176819e2adf43d6e5a53e0a4c75dbabe14784f2edd967e5ff7adbb0a8f36f7\"" Jun 25 14:37:33.982136 containerd[1245]: time="2024-06-25T14:37:33.982095706Z" level=info msg="StartContainer for \"8d176819e2adf43d6e5a53e0a4c75dbabe14784f2edd967e5ff7adbb0a8f36f7\"" Jun 25 14:37:34.006139 systemd[1]: Started cri-containerd-8d176819e2adf43d6e5a53e0a4c75dbabe14784f2edd967e5ff7adbb0a8f36f7.scope - libcontainer container 8d176819e2adf43d6e5a53e0a4c75dbabe14784f2edd967e5ff7adbb0a8f36f7. Jun 25 14:37:34.013000 audit: BPF prog-id=162 op=LOAD Jun 25 14:37:34.014000 audit: BPF prog-id=163 op=LOAD Jun 25 14:37:34.014000 audit[4171]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4130 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864313736383139653261646634336436653561353365306134633735 Jun 25 14:37:34.014000 audit: BPF prog-id=164 op=LOAD Jun 25 14:37:34.014000 audit[4171]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4130 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864313736383139653261646634336436653561353365306134633735 Jun 25 14:37:34.014000 audit: BPF prog-id=164 op=UNLOAD Jun 25 14:37:34.014000 audit: BPF prog-id=163 op=UNLOAD Jun 25 14:37:34.014000 audit: BPF prog-id=165 op=LOAD Jun 25 14:37:34.014000 audit[4171]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4130 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864313736383139653261646634336436653561353365306134633735 Jun 25 14:37:34.028307 containerd[1245]: time="2024-06-25T14:37:34.028264682Z" level=info msg="StartContainer for \"8d176819e2adf43d6e5a53e0a4c75dbabe14784f2edd967e5ff7adbb0a8f36f7\" returns successfully" Jun 25 14:37:34.395238 systemd[1]: 
run-netns-cni\x2d81789c1c\x2d4f2d\x2d0189\x2d07eb\x2da9db9ac94073.mount: Deactivated successfully. Jun 25 14:37:34.528379 containerd[1245]: time="2024-06-25T14:37:34.528205851Z" level=info msg="StopPodSandbox for \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\"" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.570 [INFO][4219] k8s.go 608: Cleaning up netns ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.570 [INFO][4219] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" iface="eth0" netns="/var/run/netns/cni-999383ee-aa8e-2649-1054-c14dcd01f4d2" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.570 [INFO][4219] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" iface="eth0" netns="/var/run/netns/cni-999383ee-aa8e-2649-1054-c14dcd01f4d2" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.571 [INFO][4219] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" iface="eth0" netns="/var/run/netns/cni-999383ee-aa8e-2649-1054-c14dcd01f4d2" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.571 [INFO][4219] k8s.go 615: Releasing IP address(es) ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.571 [INFO][4219] utils.go 188: Calico CNI releasing IP address ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.595 [INFO][4226] ipam_plugin.go 411: Releasing address using handleID ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.595 [INFO][4226] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.595 [INFO][4226] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.606 [WARNING][4226] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.606 [INFO][4226] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.607 [INFO][4226] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:34.609785 containerd[1245]: 2024-06-25 14:37:34.608 [INFO][4219] k8s.go 621: Teardown processing complete. 
ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:34.610274 containerd[1245]: time="2024-06-25T14:37:34.609931038Z" level=info msg="TearDown network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\" successfully" Jun 25 14:37:34.610274 containerd[1245]: time="2024-06-25T14:37:34.609963558Z" level=info msg="StopPodSandbox for \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\" returns successfully" Jun 25 14:37:34.612099 systemd[1]: run-netns-cni\x2d999383ee\x2daa8e\x2d2649\x2d1054\x2dc14dcd01f4d2.mount: Deactivated successfully. Jun 25 14:37:34.612922 containerd[1245]: time="2024-06-25T14:37:34.612878879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kfl4t,Uid:d97e3989-35c8-44ea-83c9-925e939d51bb,Namespace:calico-system,Attempt:1,}" Jun 25 14:37:34.690166 kubelet[2249]: E0625 14:37:34.690043 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:34.708872 kubelet[2249]: I0625 14:37:34.708502 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rn8b9" podStartSLOduration=33.708483831 podStartE2EDuration="33.708483831s" podCreationTimestamp="2024-06-25 14:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:37:34.708294831 +0000 UTC m=+48.273482058" watchObservedRunningTime="2024-06-25 14:37:34.708483831 +0000 UTC m=+48.273671058" Jun 25 14:37:34.720000 audit[4263]: NETFILTER_CFG table=filter:108 family=2 entries=8 op=nft_register_rule pid=4263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:34.722580 kernel: kauditd_printk_skb: 116 callbacks suppressed Jun 25 14:37:34.722739 kernel: audit: type=1325 audit(1719326254.720:625): table=filter:108 family=2 entries=8 op=nft_register_rule pid=4263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:34.720000 audit[4263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc8ed9450 a2=0 a3=1 items=0 ppid=2410 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.727027 kernel: audit: type=1300 audit(1719326254.720:625): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc8ed9450 a2=0 a3=1 items=0 ppid=2410 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.727089 kernel: audit: type=1327 audit(1719326254.720:625): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:34.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:34.721000 audit[4263]: NETFILTER_CFG table=nat:109 family=2 entries=44 op=nft_register_rule pid=4263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:34.721000 audit[4263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc8ed9450 a2=0 a3=1 items=0 ppid=2410 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.735667 kernel: audit: type=1325 audit(1719326254.721:626): table=nat:109 family=2 entries=44 op=nft_register_rule pid=4263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:34.735780 kernel: audit: type=1300 audit(1719326254.721:626): arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc8ed9450 a2=0 a3=1 items=0 ppid=2410 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.735810 kernel: audit: type=1327 audit(1719326254.721:626): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:34.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:34.764670 systemd-networkd[1078]: cali3e2b024aeed: Link UP Jun 25 14:37:34.766273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3e2b024aeed: link becomes ready Jun 25 14:37:34.766032 systemd-networkd[1078]: cali3e2b024aeed: Gained carrier Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.662 [INFO][4241] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kfl4t-eth0 csi-node-driver- calico-system d97e3989-35c8-44ea-83c9-925e939d51bb 871 0 2024-06-25 14:37:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-kfl4t eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3e2b024aeed [] []}} ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.662 [INFO][4241] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.687 [INFO][4254] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" HandleID="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.717 [INFO][4254] ipam_plugin.go 264: Auto assigning IP ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" HandleID="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027bdf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kfl4t", "timestamp":"2024-06-25 14:37:34.687017464 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.717 [INFO][4254] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.717 [INFO][4254] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.717 [INFO][4254] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.720 [INFO][4254] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.732 [INFO][4254] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.738 [INFO][4254] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.742 [INFO][4254] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.745 [INFO][4254] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.745 [INFO][4254] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.748 [INFO][4254] ipam.go 1685: Creating new handle: k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2 Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.753 [INFO][4254] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.760 [INFO][4254] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.760 [INFO][4254] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" host="localhost" Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.760 [INFO][4254] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:37:34.784076 containerd[1245]: 2024-06-25 14:37:34.760 [INFO][4254] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" HandleID="k8s-pod-network.abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.784738 containerd[1245]: 2024-06-25 14:37:34.762 [INFO][4241] k8s.go 386: Populated endpoint ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfl4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d97e3989-35c8-44ea-83c9-925e939d51bb", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kfl4t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3e2b024aeed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:34.784738 containerd[1245]: 2024-06-25 14:37:34.762 [INFO][4241] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.784738 containerd[1245]: 2024-06-25 14:37:34.762 [INFO][4241] dataplane_linux.go 68: Setting the host side veth name to cali3e2b024aeed ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.784738 containerd[1245]: 2024-06-25 14:37:34.764 [INFO][4241] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.784738 containerd[1245]: 2024-06-25 14:37:34.766 [INFO][4241] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfl4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d97e3989-35c8-44ea-83c9-925e939d51bb", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2", Pod:"csi-node-driver-kfl4t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3e2b024aeed", MAC:"ce:85:e9:b8:4f:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:34.784738 containerd[1245]: 2024-06-25 14:37:34.780 [INFO][4241] k8s.go 500: Wrote updated endpoint to datastore ContainerID="abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2" Namespace="calico-system" Pod="csi-node-driver-kfl4t" WorkloadEndpoint="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:34.788000 audit[4278]: NETFILTER_CFG table=filter:110 family=2 entries=38 op=nft_register_chain pid=4278 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:34.788000 audit[4278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19812 a0=3 a1=ffffedd6b5c0 a2=0 a3=ffff8ae50fa8 items=0 ppid=3554 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.793960 kernel: audit: type=1325 audit(1719326254.788:627): table=filter:110 family=2 entries=38 op=nft_register_chain pid=4278 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:34.794059 kernel: audit: type=1300 audit(1719326254.788:627): arch=c00000b7 syscall=211 success=yes exit=19812 a0=3 a1=ffffedd6b5c0 a2=0 a3=ffff8ae50fa8 items=0 ppid=3554 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.794081 kernel: audit: type=1327 audit(1719326254.788:627): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:34.788000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:34.800410 containerd[1245]: time="2024-06-25T14:37:34.800218062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:34.800410 containerd[1245]: time="2024-06-25T14:37:34.800283582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:34.800410 containerd[1245]: time="2024-06-25T14:37:34.800303582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:34.800410 containerd[1245]: time="2024-06-25T14:37:34.800317622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:34.818208 systemd[1]: Started cri-containerd-abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2.scope - libcontainer container abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2. Jun 25 14:37:34.825000 audit: BPF prog-id=166 op=LOAD Jun 25 14:37:34.825000 audit: BPF prog-id=167 op=LOAD Jun 25 14:37:34.825000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4287 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.827088 kernel: audit: type=1334 audit(1719326254.825:628): prog-id=166 op=LOAD Jun 25 14:37:34.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162663333623331643933623434633535313631633434373530653862 Jun 25 14:37:34.825000 audit: BPF prog-id=168 op=LOAD Jun 25 14:37:34.825000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4287 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162663333623331643933623434633535313631633434373530653862 Jun 25 14:37:34.825000 audit: BPF prog-id=168 op=UNLOAD Jun 25 14:37:34.826000 audit: BPF prog-id=167 op=UNLOAD Jun 25 14:37:34.826000 audit: BPF prog-id=169 op=LOAD Jun 25 14:37:34.826000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4287 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:34.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162663333623331643933623434633535313631633434373530653862 Jun 25 14:37:34.827599 systemd-resolved[1184]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:37:34.837039 containerd[1245]: time="2024-06-25T14:37:34.836996115Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-kfl4t,Uid:d97e3989-35c8-44ea-83c9-925e939d51bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2\"" Jun 25 14:37:34.838895 containerd[1245]: time="2024-06-25T14:37:34.838861915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:37:35.355165 systemd-networkd[1078]: cali04415f7eb1b: Gained IPv6LL Jun 25 14:37:35.430875 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:59936.service - OpenSSH per-connection server daemon (10.0.0.1:59936). Jun 25 14:37:35.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.122:22-10.0.0.1:59936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:35.466000 audit[4326]: USER_ACCT pid=4326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.468149 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 59936 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:35.468000 audit[4326]: CRED_ACQ pid=4326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.468000 audit[4326]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe500f1f0 a2=3 a3=1 items=0 ppid=1 pid=4326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:35.468000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:35.470732 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:35.475907 systemd-logind[1235]: New session 13 of user core. Jun 25 14:37:35.484226 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 14:37:35.487000 audit[4326]: USER_START pid=4326 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.489000 audit[4328]: CRED_ACQ pid=4328 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.691598 kubelet[2249]: E0625 14:37:35.691493 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:35.715323 sshd[4326]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:35.715000 audit[4326]: USER_END pid=4326 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.715000 audit[4326]: CRED_DISP pid=4326 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.722000 audit[4343]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:35.722000 audit[4343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffdcb30690 a2=0 a3=1 items=0 ppid=2410 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:35.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:35.725767 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:59936.service: Deactivated successfully. Jun 25 14:37:35.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.122:22-10.0.0.1:59936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:35.726442 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:37:35.727018 systemd-logind[1235]: Session 13 logged out. Waiting for processes to exit. Jun 25 14:37:35.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.122:22-10.0.0.1:59938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:35.728813 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:59938.service - OpenSSH per-connection server daemon (10.0.0.1:59938). Jun 25 14:37:35.729808 systemd-logind[1235]: Removed session 13. 
Jun 25 14:37:35.737000 audit[4343]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:35.737000 audit[4343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffdcb30690 a2=0 a3=1 items=0 ppid=2410 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:35.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:35.761000 audit[4345]: USER_ACCT pid=4345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.762491 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 59938 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:35.762000 audit[4345]: CRED_ACQ pid=4345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.762000 audit[4345]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9971e80 a2=3 a3=1 items=0 ppid=1 pid=4345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:35.762000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:35.763830 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:35.767647 systemd-logind[1235]: New session 14 of user core. Jun 25 14:37:35.776176 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 14:37:35.779000 audit[4345]: USER_START pid=4345 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.781000 audit[4352]: CRED_ACQ pid=4352 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:35.820956 containerd[1245]: time="2024-06-25T14:37:35.820902389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:35.821466 containerd[1245]: time="2024-06-25T14:37:35.821424509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:37:35.822319 containerd[1245]: time="2024-06-25T14:37:35.822285910Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:35.823628 containerd[1245]: time="2024-06-25T14:37:35.823600950Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:35.825091 containerd[1245]: time="2024-06-25T14:37:35.825048631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:35.826147 containerd[1245]: time="2024-06-25T14:37:35.826107191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 987.203676ms" Jun 25 14:37:35.826267 containerd[1245]: time="2024-06-25T14:37:35.826245951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:37:35.829142 containerd[1245]: time="2024-06-25T14:37:35.828929752Z" level=info msg="CreateContainer within sandbox \"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:37:35.878336 containerd[1245]: time="2024-06-25T14:37:35.878275087Z" level=info msg="CreateContainer within sandbox \"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d424d19f2c2ff9aa6ac672ba92d7ffeb81eab70dd5f6b66b653f6743b04b1730\"" Jun 25 14:37:35.879052 containerd[1245]: time="2024-06-25T14:37:35.879022048Z" level=info msg="StartContainer for \"d424d19f2c2ff9aa6ac672ba92d7ffeb81eab70dd5f6b66b653f6743b04b1730\"" Jun 25 14:37:35.917225 systemd[1]: Started cri-containerd-d424d19f2c2ff9aa6ac672ba92d7ffeb81eab70dd5f6b66b653f6743b04b1730.scope - libcontainer container d424d19f2c2ff9aa6ac672ba92d7ffeb81eab70dd5f6b66b653f6743b04b1730. 
Jun 25 14:37:35.931000 audit: BPF prog-id=170 op=LOAD Jun 25 14:37:35.931000 audit[4367]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4287 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:35.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323464313966326332666639616136616336373262613932643766 Jun 25 14:37:35.931000 audit: BPF prog-id=171 op=LOAD Jun 25 14:37:35.931000 audit[4367]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4287 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:35.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323464313966326332666639616136616336373262613932643766 Jun 25 14:37:35.931000 audit: BPF prog-id=171 op=UNLOAD Jun 25 14:37:35.931000 audit: BPF prog-id=170 op=UNLOAD Jun 25 14:37:35.931000 audit: BPF prog-id=172 op=LOAD Jun 25 14:37:35.931000 audit[4367]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4287 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:35.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434323464313966326332666639616136616336373262613932643766 Jun 25 14:37:35.953345 containerd[1245]: time="2024-06-25T14:37:35.953205191Z" level=info msg="StartContainer for \"d424d19f2c2ff9aa6ac672ba92d7ffeb81eab70dd5f6b66b653f6743b04b1730\" returns successfully" Jun 25 14:37:35.957447 containerd[1245]: time="2024-06-25T14:37:35.957405712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:37:36.089547 sshd[4345]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:36.090000 audit[4345]: USER_END pid=4345 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:36.090000 audit[4345]: CRED_DISP pid=4345 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:36.101651 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:59938.service: Deactivated successfully. Jun 25 14:37:36.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.122:22-10.0.0.1:59938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 14:37:36.102365 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:37:36.103018 systemd-logind[1235]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:37:36.104473 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:59954.service - OpenSSH per-connection server daemon (10.0.0.1:59954). Jun 25 14:37:36.105332 systemd-logind[1235]: Removed session 14. Jun 25 14:37:36.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.122:22-10.0.0.1:59954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:36.145000 audit[4397]: USER_ACCT pid=4397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:36.146729 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 59954 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:36.147000 audit[4397]: CRED_ACQ pid=4397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:36.147000 audit[4397]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd358e210 a2=3 a3=1 items=0 ppid=1 pid=4397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:36.147000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:36.148778 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:36.152432 systemd-logind[1235]: New session 15 of user core. Jun 25 14:37:36.159180 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 14:37:36.162000 audit[4397]: USER_START pid=4397 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:36.163000 audit[4399]: CRED_ACQ pid=4399 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:36.379153 systemd-networkd[1078]: cali3e2b024aeed: Gained IPv6LL Jun 25 14:37:36.393391 systemd[1]: run-containerd-runc-k8s.io-d424d19f2c2ff9aa6ac672ba92d7ffeb81eab70dd5f6b66b653f6743b04b1730-runc.PPVMwI.mount: Deactivated successfully. 
Jun 25 14:37:36.695563 kubelet[2249]: E0625 14:37:36.695402 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:37.053322 containerd[1245]: time="2024-06-25T14:37:37.053208637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:37.053854 containerd[1245]: time="2024-06-25T14:37:37.053772637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:37:37.055216 containerd[1245]: time="2024-06-25T14:37:37.055183718Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:37.062149 containerd[1245]: time="2024-06-25T14:37:37.061414919Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:37.063010 containerd[1245]: time="2024-06-25T14:37:37.062953720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:37.063851 containerd[1245]: time="2024-06-25T14:37:37.063804520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.106326208s" Jun 25 14:37:37.063926 containerd[1245]: time="2024-06-25T14:37:37.063850880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:37:37.065821 containerd[1245]: time="2024-06-25T14:37:37.065781081Z" level=info msg="CreateContainer within sandbox \"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:37:37.080853 containerd[1245]: time="2024-06-25T14:37:37.080794725Z" level=info msg="CreateContainer within sandbox \"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"828a7275d3a6a2eedaf10fea3f1a9a4af60c6cbade8ca22329281a783a2908bd\"" Jun 25 14:37:37.081626 containerd[1245]: time="2024-06-25T14:37:37.081567205Z" level=info msg="StartContainer for \"828a7275d3a6a2eedaf10fea3f1a9a4af60c6cbade8ca22329281a783a2908bd\"" Jun 25 14:37:37.140147 systemd[1]: Started cri-containerd-828a7275d3a6a2eedaf10fea3f1a9a4af60c6cbade8ca22329281a783a2908bd.scope - libcontainer container 828a7275d3a6a2eedaf10fea3f1a9a4af60c6cbade8ca22329281a783a2908bd. 
Jun 25 14:37:37.153000 audit: BPF prog-id=173 op=LOAD Jun 25 14:37:37.153000 audit[4424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=4287 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832386137323735643361366132656564616631306665613366316139 Jun 25 14:37:37.153000 audit: BPF prog-id=174 op=LOAD Jun 25 14:37:37.153000 audit[4424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=4287 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832386137323735643361366132656564616631306665613366316139 Jun 25 14:37:37.153000 audit: BPF prog-id=174 op=UNLOAD Jun 25 14:37:37.153000 audit: BPF prog-id=173 op=UNLOAD Jun 25 14:37:37.153000 audit: BPF prog-id=175 op=LOAD Jun 25 14:37:37.153000 audit[4424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=4287 pid=4424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832386137323735643361366132656564616631306665613366316139 Jun 25 14:37:37.169992 containerd[1245]: time="2024-06-25T14:37:37.169918070Z" level=info msg="StartContainer for \"828a7275d3a6a2eedaf10fea3f1a9a4af60c6cbade8ca22329281a783a2908bd\" returns successfully" Jun 25 14:37:37.559000 audit[4456]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:37.559000 audit[4456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=fffffaea9b20 a2=0 a3=1 items=0 ppid=2410 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:37.560000 audit[4456]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:37.560000 audit[4456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffaea9b20 a2=0 a3=1 items=0 ppid=2410 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.560000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:37.573000 audit[4458]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:37.575492 sshd[4397]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:37.573000 audit[4458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd370aef0 a2=0 a3=1 items=0 ppid=2410 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:37.575000 audit[4397]: USER_END pid=4397 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:37.575000 audit[4397]: CRED_DISP pid=4397 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:37.575000 audit[4458]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:37.575000 audit[4458]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd370aef0 a2=0 a3=1 items=0 ppid=2410 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.575000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:37.584468 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:59954.service: Deactivated successfully. Jun 25 14:37:37.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.122:22-10.0.0.1:59954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:37.585171 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:37:37.585796 systemd-logind[1235]: Session 15 logged out. Waiting for processes to exit. Jun 25 14:37:37.587348 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:59956.service - OpenSSH per-connection server daemon (10.0.0.1:59956). Jun 25 14:37:37.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.122:22-10.0.0.1:59956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:37.589793 systemd-logind[1235]: Removed session 15. 
Jun 25 14:37:37.627000 audit[4461]: USER_ACCT pid=4461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:37.628271 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 59956 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:37.629000 audit[4461]: CRED_ACQ pid=4461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:37.629000 audit[4461]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4d90480 a2=3 a3=1 items=0 ppid=1 pid=4461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:37.629000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:37.630580 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:37.633911 kubelet[2249]: I0625 14:37:37.633871 2249 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:37:37.635092 systemd-logind[1235]: New session 16 of user core. Jun 25 14:37:37.639151 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 14:37:37.641000 audit[4461]: USER_START pid=4461 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:37.643000 audit[4463]: CRED_ACQ pid=4463 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:37.644880 kubelet[2249]: I0625 14:37:37.644857 2249 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:37:37.715968 kubelet[2249]: E0625 14:37:37.715938 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:38.010551 sshd[4461]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:38.012000 audit[4461]: USER_END pid=4461 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.012000 audit[4461]: CRED_DISP pid=4461 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.018556 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:59956.service: Deactivated successfully. 
Jun 25 14:37:38.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.122:22-10.0.0.1:59956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:38.019300 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:37:38.019988 systemd-logind[1235]: Session 16 logged out. Waiting for processes to exit. Jun 25 14:37:38.029461 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:59960.service - OpenSSH per-connection server daemon (10.0.0.1:59960). Jun 25 14:37:38.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.122:22-10.0.0.1:59960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:38.034055 systemd-logind[1235]: Removed session 16. Jun 25 14:37:38.060000 audit[4475]: USER_ACCT pid=4475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.061608 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 59960 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:38.061000 audit[4475]: CRED_ACQ pid=4475 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.061000 audit[4475]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdde248a0 a2=3 a3=1 items=0 ppid=1 pid=4475 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:38.061000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:38.062961 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:38.069451 systemd-logind[1235]: New session 17 of user core. Jun 25 14:37:38.074214 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 14:37:38.078000 audit[4475]: USER_START pid=4475 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.079000 audit[4477]: CRED_ACQ pid=4477 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.234403 sshd[4475]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:38.234000 audit[4475]: USER_END pid=4475 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.234000 audit[4475]: CRED_DISP pid=4475 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:38.236991 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:59960.service: Deactivated successfully. Jun 25 14:37:38.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.122:22-10.0.0.1:59960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:38.237757 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:37:38.238905 systemd-logind[1235]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:37:38.239634 systemd-logind[1235]: Removed session 17. Jun 25 14:37:39.330668 systemd[1]: run-containerd-runc-k8s.io-a0061c2f9ee27a4f2d6ca9e863ad6c4eec15ef4cd7d6f38a3b5f55516d4474ec-runc.dxjkgP.mount: Deactivated successfully. 
Jun 25 14:37:39.391575 kubelet[2249]: E0625 14:37:39.391529 2249 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 14:37:39.405907 kubelet[2249]: I0625 14:37:39.405844 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kfl4t" podStartSLOduration=30.179925495 podStartE2EDuration="32.4058275s" podCreationTimestamp="2024-06-25 14:37:07 +0000 UTC" firstStartedPulling="2024-06-25 14:37:34.838648275 +0000 UTC m=+48.403835502" lastFinishedPulling="2024-06-25 14:37:37.06455032 +0000 UTC m=+50.629737507" observedRunningTime="2024-06-25 14:37:37.728643865 +0000 UTC m=+51.293831092" watchObservedRunningTime="2024-06-25 14:37:39.4058275 +0000 UTC m=+52.971014727" Jun 25 14:37:41.905000 audit[4513]: NETFILTER_CFG table=filter:117 family=2 entries=33 op=nft_register_rule pid=4513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:41.907396 kernel: kauditd_printk_skb: 106 callbacks suppressed Jun 25 14:37:41.907578 kernel: audit: type=1325 audit(1719326261.905:695): table=filter:117 family=2 entries=33 op=nft_register_rule pid=4513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:41.905000 audit[4513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=ffffc4fa39a0 a2=0 a3=1 items=0 ppid=2410 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:41.912656 kernel: audit: type=1300 audit(1719326261.905:695): arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=ffffc4fa39a0 a2=0 a3=1 items=0 ppid=2410 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:41.912731 kernel: audit: type=1327 audit(1719326261.905:695): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:41.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:41.906000 audit[4513]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:41.906000 audit[4513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc4fa39a0 a2=0 a3=1 items=0 ppid=2410 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:41.922061 kubelet[2249]: I0625 14:37:41.922017 2249 topology_manager.go:215] "Topology Admit Handler" podUID="f688e211-866f-4999-ab9a-b10156be11a0" podNamespace="calico-apiserver" podName="calico-apiserver-96c76b-6v7rn" Jun 25 14:37:41.923941 kernel: audit: type=1325 audit(1719326261.906:696): table=nat:118 family=2 entries=20 op=nft_register_rule pid=4513 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:41.924029 kernel: audit: type=1300 audit(1719326261.906:696): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc4fa39a0 a2=0 a3=1 items=0 ppid=2410 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:41.906000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:41.925456 kernel: audit: type=1327 audit(1719326261.906:696): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:41.928460 systemd[1]: Created slice kubepods-besteffort-podf688e211_866f_4999_ab9a_b10156be11a0.slice - libcontainer container kubepods-besteffort-podf688e211_866f_4999_ab9a_b10156be11a0.slice. Jun 25 14:37:41.938000 audit[4515]: NETFILTER_CFG table=filter:119 family=2 entries=34 op=nft_register_rule pid=4515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:41.938000 audit[4515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=fffff51c62a0 a2=0 a3=1 items=0 ppid=2410 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:41.945196 kernel: audit: type=1325 audit(1719326261.938:697): table=filter:119 family=2 entries=34 op=nft_register_rule pid=4515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:41.945278 kernel: audit: type=1300 audit(1719326261.938:697): arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=fffff51c62a0 a2=0 a3=1 items=0 ppid=2410 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:41.945306 kernel: audit: type=1327 audit(1719326261.938:697): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:41.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:41.941000 audit[4515]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:41.941000 audit[4515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff51c62a0 a2=0 a3=1 items=0 ppid=2410 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:41.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:41.950989 kernel: audit: type=1325 audit(1719326261.941:698): table=nat:120 family=2 entries=20 op=nft_register_rule pid=4515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:42.024766 kubelet[2249]: I0625 14:37:42.024733 2249 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tl8p\" (UniqueName: \"kubernetes.io/projected/f688e211-866f-4999-ab9a-b10156be11a0-kube-api-access-9tl8p\") pod \"calico-apiserver-96c76b-6v7rn\" (UID: \"f688e211-866f-4999-ab9a-b10156be11a0\") " pod="calico-apiserver/calico-apiserver-96c76b-6v7rn" Jun 25 14:37:42.024766 kubelet[2249]: I0625 14:37:42.024770 2249 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f688e211-866f-4999-ab9a-b10156be11a0-calico-apiserver-certs\") pod \"calico-apiserver-96c76b-6v7rn\" (UID: \"f688e211-866f-4999-ab9a-b10156be11a0\") " pod="calico-apiserver/calico-apiserver-96c76b-6v7rn" Jun 25 14:37:42.130259 kubelet[2249]: E0625 14:37:42.130204 2249 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:37:42.130417 kubelet[2249]: E0625 14:37:42.130308 2249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f688e211-866f-4999-ab9a-b10156be11a0-calico-apiserver-certs podName:f688e211-866f-4999-ab9a-b10156be11a0 nodeName:}" failed. No retries permitted until 2024-06-25 14:37:42.630282275 +0000 UTC m=+56.195469462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f688e211-866f-4999-ab9a-b10156be11a0-calico-apiserver-certs") pod "calico-apiserver-96c76b-6v7rn" (UID: "f688e211-866f-4999-ab9a-b10156be11a0") : secret "calico-apiserver-certs" not found Jun 25 14:37:42.731551 kubelet[2249]: E0625 14:37:42.731455 2249 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:37:42.731551 kubelet[2249]: E0625 14:37:42.731527 2249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f688e211-866f-4999-ab9a-b10156be11a0-calico-apiserver-certs podName:f688e211-866f-4999-ab9a-b10156be11a0 nodeName:}" failed. No retries permitted until 2024-06-25 14:37:43.731511876 +0000 UTC m=+57.296699103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f688e211-866f-4999-ab9a-b10156be11a0-calico-apiserver-certs") pod "calico-apiserver-96c76b-6v7rn" (UID: "f688e211-866f-4999-ab9a-b10156be11a0") : secret "calico-apiserver-certs" not found Jun 25 14:37:43.248520 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:49338.service - OpenSSH per-connection server daemon (10.0.0.1:49338). Jun 25 14:37:43.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.122:22-10.0.0.1:49338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:37:43.286000 audit[4518]: USER_ACCT pid=4518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:43.287631 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 49338 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:43.287000 audit[4518]: CRED_ACQ pid=4518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:43.287000 audit[4518]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdd9b9c20 a2=3 a3=1 items=0 ppid=1 pid=4518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:43.287000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:43.288993 sshd[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:43.299012 systemd-logind[1235]: New session 18 of user core. Jun 25 14:37:43.303200 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 14:37:43.306000 audit[4518]: USER_START pid=4518 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:43.308000 audit[4520]: CRED_ACQ pid=4520 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:43.536236 sshd[4518]: pam_unix(sshd:session): session closed for user core Jun 25 14:37:43.536000 audit[4518]: USER_END pid=4518 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:43.536000 audit[4518]: CRED_DISP pid=4518 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:43.539023 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:37:43.539779 systemd-logind[1235]: Session 18 logged out. Waiting for processes to exit. Jun 25 14:37:43.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.122:22-10.0.0.1:49338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:43.539810 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:49338.service: Deactivated successfully. Jun 25 14:37:43.540930 systemd-logind[1235]: Removed session 18. 
Jun 25 14:37:43.641000 audit[4531]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:43.641000 audit[4531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe2d56570 a2=0 a3=1 items=0 ppid=2410 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:43.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:43.643000 audit[4531]: NETFILTER_CFG table=nat:122 family=2 entries=104 op=nft_register_chain pid=4531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:43.643000 audit[4531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe2d56570 a2=0 a3=1 items=0 ppid=2410 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:43.643000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:44.031510 containerd[1245]: time="2024-06-25T14:37:44.031454244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c76b-6v7rn,Uid:f688e211-866f-4999-ab9a-b10156be11a0,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:37:44.159997 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:37:44.160303 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia238e765d18: link becomes ready Jun 25 14:37:44.161207 systemd-networkd[1078]: calia238e765d18: Link UP Jun 25 14:37:44.161375 systemd-networkd[1078]: calia238e765d18: Gained carrier Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.074 [INFO][4535] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0 calico-apiserver-96c76b- calico-apiserver f688e211-866f-4999-ab9a-b10156be11a0 1002 0 2024-06-25 14:37:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96c76b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-96c76b-6v7rn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia238e765d18 [] []}} ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.074 [INFO][4535] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.114 [INFO][4549] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" 
HandleID="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Workload="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.125 [INFO][4549] ipam_plugin.go 264: Auto assigning IP ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" HandleID="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Workload="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000390b50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-96c76b-6v7rn", "timestamp":"2024-06-25 14:37:44.114717899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.125 [INFO][4549] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.125 [INFO][4549] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.125 [INFO][4549] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.127 [INFO][4549] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.131 [INFO][4549] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.135 [INFO][4549] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.137 [INFO][4549] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.139 [INFO][4549] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.139 [INFO][4549] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.141 [INFO][4549] ipam.go 1685: Creating new handle: k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4 Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.144 [INFO][4549] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.150 [INFO][4549] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.150 [INFO][4549] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" host="localhost" Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 
14:37:44.150 [INFO][4549] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:44.174969 containerd[1245]: 2024-06-25 14:37:44.150 [INFO][4549] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" HandleID="k8s-pod-network.6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Workload="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" Jun 25 14:37:44.175650 containerd[1245]: 2024-06-25 14:37:44.152 [INFO][4535] k8s.go 386: Populated endpoint ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0", GenerateName:"calico-apiserver-96c76b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f688e211-866f-4999-ab9a-b10156be11a0", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c76b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-96c76b-6v7rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia238e765d18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:44.175650 containerd[1245]: 2024-06-25 14:37:44.152 [INFO][4535] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" Jun 25 14:37:44.175650 containerd[1245]: 2024-06-25 14:37:44.152 [INFO][4535] dataplane_linux.go 68: Setting the host side veth name to calia238e765d18 ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" Jun 25 14:37:44.175650 containerd[1245]: 2024-06-25 14:37:44.158 [INFO][4535] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" Jun 25 14:37:44.175650 containerd[1245]: 2024-06-25 14:37:44.161 [INFO][4535] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" 
Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0", GenerateName:"calico-apiserver-96c76b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f688e211-866f-4999-ab9a-b10156be11a0", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c76b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4", Pod:"calico-apiserver-96c76b-6v7rn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia238e765d18", MAC:"22:09:0a:c2:b1:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:44.175650 containerd[1245]: 2024-06-25 14:37:44.172 [INFO][4535] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4" Namespace="calico-apiserver" Pod="calico-apiserver-96c76b-6v7rn" WorkloadEndpoint="localhost-k8s-calico--apiserver--96c76b--6v7rn-eth0" Jun 25 14:37:44.189000 audit[4577]: NETFILTER_CFG table=filter:123 family=2 entries=51 op=nft_register_chain pid=4577 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:37:44.189000 audit[4577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26260 a0=3 a1=ffffcd9afc50 a2=0 a3=ffffa43b3fa8 items=0 ppid=3554 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:44.189000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:37:44.194016 containerd[1245]: time="2024-06-25T14:37:44.193920753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:37:44.194016 containerd[1245]: time="2024-06-25T14:37:44.193989473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:44.194334 containerd[1245]: time="2024-06-25T14:37:44.194003753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:37:44.194334 containerd[1245]: time="2024-06-25T14:37:44.194013313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:37:44.216161 systemd[1]: Started cri-containerd-6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4.scope - libcontainer container 6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4. Jun 25 14:37:44.222000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.222000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.222000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.222000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=76 a1=400f29cc00 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:37:44.222000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=74 a1=400eb2be60 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:37:44.222000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:37:44.222000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:37:44.222000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=75 a1=4006c71920 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:37:44.222000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:37:44.222000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.222000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=74 a1=4006fe4920 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:37:44.222000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:37:44.222000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.222000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=74 a1=400ef59470 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:37:44.222000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:37:44.223000 audit[2138]: AVC avc: denied { watch } for pid=2138 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c683,c878 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.223000 audit[2138]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=77 a1=400e19cd50 a2=fc6 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c683,c878 key=(null) Jun 25 14:37:44.223000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313232002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 14:37:44.228000 audit: BPF prog-id=176 op=LOAD Jun 25 14:37:44.228000 audit: BPF prog-id=177 op=LOAD Jun 25 14:37:44.228000 audit[4591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4582 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:44.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637373764363631353234626632653138666338353562336466376530 Jun 25 14:37:44.228000 audit: BPF prog-id=178 op=LOAD Jun 25 14:37:44.228000 audit[4591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4582 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:44.228000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637373764363631353234626632653138666338353562336466376530 Jun 25 14:37:44.228000 audit: BPF prog-id=178 op=UNLOAD Jun 25 14:37:44.228000 audit: BPF prog-id=177 op=UNLOAD Jun 25 14:37:44.228000 audit: BPF prog-id=179 op=LOAD Jun 25 14:37:44.228000 audit[4591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=4582 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:44.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637373764363631353234626632653138666338353562336466376530 Jun 25 14:37:44.230266 systemd-resolved[1184]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 14:37:44.247434 containerd[1245]: time="2024-06-25T14:37:44.247378562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c76b-6v7rn,Uid:f688e211-866f-4999-ab9a-b10156be11a0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4\"" Jun 25 14:37:44.249273 containerd[1245]: time="2024-06-25T14:37:44.249226843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:37:44.316000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.316000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002153e00 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:44.316000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:44.316000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:37:44.316000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4003595160 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null) Jun 25 14:37:44.316000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:37:45.467771 systemd-networkd[1078]: calia238e765d18: Gained IPv6LL Jun 25 14:37:45.999169 containerd[1245]: time="2024-06-25T14:37:45.999124621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:45.999715 containerd[1245]: time="2024-06-25T14:37:45.999676461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 14:37:46.002723 containerd[1245]: time="2024-06-25T14:37:46.001561262Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:46.006041 containerd[1245]: time="2024-06-25T14:37:46.006004982Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:46.010825 containerd[1245]: time="2024-06-25T14:37:46.010452063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:37:46.013625 containerd[1245]: time="2024-06-25T14:37:46.013080183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.76381226s" Jun 25 14:37:46.013625 containerd[1245]: time="2024-06-25T14:37:46.013126623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:37:46.018050 containerd[1245]: time="2024-06-25T14:37:46.017410144Z" level=info msg="CreateContainer within sandbox \"6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:37:46.036119 containerd[1245]: time="2024-06-25T14:37:46.036068867Z" level=info msg="CreateContainer within sandbox \"6777d661524bf2e18fc855b3df7e0f2908feb8fad66d5a1698351dd947a67ea4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"768a2fa9ea67b13774c154c9053c583da5370cdedcb9bfbc482b92bb4796e93a\"" Jun 25 14:37:46.037800 containerd[1245]: time="2024-06-25T14:37:46.036952627Z" level=info msg="StartContainer for \"768a2fa9ea67b13774c154c9053c583da5370cdedcb9bfbc482b92bb4796e93a\"" Jun 25 14:37:46.085259 systemd[1]: Started cri-containerd-768a2fa9ea67b13774c154c9053c583da5370cdedcb9bfbc482b92bb4796e93a.scope - libcontainer container 768a2fa9ea67b13774c154c9053c583da5370cdedcb9bfbc482b92bb4796e93a. 
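The audit records above interleave three things: SELinux AVC denials for kube-apiserver and kube-controller-manager trying to watch their certificate files under /etc/kubernetes/pki (on arm64, syscall 27 is inotify_add_watch and exit=-13 is -EACCES, and permissive=0 means the watches are actually refused), BPF program LOAD/UNLOAD events from runc setting up the new container, and PROCTITLE records whose values are the hex-encoded, NUL-separated argv of the audited process (the kernel truncates long values, which is why the decoded command lines end mid-argument). A minimal Python sketch for decoding a PROCTITLE value follows; the helper name is illustrative, and the sample string is just a prefix of the runc PROCTITLE recorded above:

    def decode_proctitle(hex_value: str) -> list[str]:
        # PROCTITLE carries the process argv hex-encoded, with NUL bytes
        # separating the individual arguments; drop empty trailing fields.
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    # Prefix of the runc PROCTITLE recorded above:
    sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
              "2F6B38732E696F002D2D6C6F67")
    print(decode_proctitle(sample))
    # -> ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']

Applied to the full values, the kube-apiserver and kube-controller-manager PROCTITLEs decode to their (truncated) command lines, and the iptables-restore PROCTITLEs seen later in this log decode to "iptables-restore -w 5 -W 100000 --noflush --counters".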
Jun 25 14:37:46.093000 audit: BPF prog-id=180 op=LOAD Jun 25 14:37:46.094000 audit: BPF prog-id=181 op=LOAD Jun 25 14:37:46.094000 audit[4638]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001b18b0 a2=78 a3=0 items=0 ppid=4582 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:46.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736386132666139656136376231333737346331353463393035336335 Jun 25 14:37:46.094000 audit: BPF prog-id=182 op=LOAD Jun 25 14:37:46.094000 audit[4638]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001b1640 a2=78 a3=0 items=0 ppid=4582 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:46.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736386132666139656136376231333737346331353463393035336335 Jun 25 14:37:46.094000 audit: BPF prog-id=182 op=UNLOAD Jun 25 14:37:46.094000 audit: BPF prog-id=181 op=UNLOAD Jun 25 14:37:46.094000 audit: BPF prog-id=183 op=LOAD Jun 25 14:37:46.094000 audit[4638]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001b1b10 a2=78 a3=0 items=0 ppid=4582 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:46.094000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736386132666139656136376231333737346331353463393035336335 Jun 25 14:37:46.167760 containerd[1245]: time="2024-06-25T14:37:46.167711767Z" level=info msg="StartContainer for \"768a2fa9ea67b13774c154c9053c583da5370cdedcb9bfbc482b92bb4796e93a\" returns successfully" Jun 25 14:37:46.521672 containerd[1245]: time="2024-06-25T14:37:46.521628062Z" level=info msg="StopPodSandbox for \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\"" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.576 [WARNING][4691] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0", GenerateName:"calico-kube-controllers-567786b6b9-", Namespace:"calico-system", SelfLink:"", UID:"1ad4c167-f4c1-437b-b169-12ec098e308e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567786b6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300", Pod:"calico-kube-controllers-567786b6b9-gh9kf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliee9850568fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.577 [INFO][4691] k8s.go 608: Cleaning up netns ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.577 [INFO][4691] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" iface="eth0" netns="" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.577 [INFO][4691] k8s.go 615: Releasing IP address(es) ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.577 [INFO][4691] utils.go 188: Calico CNI releasing IP address ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.600 [INFO][4699] ipam_plugin.go 411: Releasing address using handleID ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.601 [INFO][4699] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.601 [INFO][4699] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.609 [WARNING][4699] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.609 [INFO][4699] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.610 [INFO][4699] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:46.613909 containerd[1245]: 2024-06-25 14:37:46.612 [INFO][4691] k8s.go 621: Teardown processing complete. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.614740 containerd[1245]: time="2024-06-25T14:37:46.614688837Z" level=info msg="TearDown network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\" successfully" Jun 25 14:37:46.614836 containerd[1245]: time="2024-06-25T14:37:46.614813477Z" level=info msg="StopPodSandbox for \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\" returns successfully" Jun 25 14:37:46.615895 containerd[1245]: time="2024-06-25T14:37:46.615394797Z" level=info msg="RemovePodSandbox for \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\"" Jun 25 14:37:46.630429 containerd[1245]: time="2024-06-25T14:37:46.620604478Z" level=info msg="Forcibly stopping sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\"" Jun 25 14:37:46.644000 audit[4718]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=4718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:46.644000 audit[4718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd6bac8a0 a2=0 a3=1 items=0 ppid=2410 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:46.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:46.646000 audit[4718]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=4718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:46.646000 audit[4718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=ffffd6bac8a0 a2=0 a3=1 items=0 ppid=2410 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:46.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:46.746719 kubelet[2249]: I0625 14:37:46.745921 2249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-96c76b-6v7rn" podStartSLOduration=3.979482356 podStartE2EDuration="5.745904097s" podCreationTimestamp="2024-06-25 14:37:41 +0000 UTC" firstStartedPulling="2024-06-25 14:37:44.248712003 +0000 UTC m=+57.813899230" lastFinishedPulling="2024-06-25 14:37:46.015133744 +0000 UTC 
m=+59.580320971" observedRunningTime="2024-06-25 14:37:46.745683457 +0000 UTC m=+60.310870684" watchObservedRunningTime="2024-06-25 14:37:46.745904097 +0000 UTC m=+60.311091324" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.686 [WARNING][4724] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0", GenerateName:"calico-kube-controllers-567786b6b9-", Namespace:"calico-system", SelfLink:"", UID:"1ad4c167-f4c1-437b-b169-12ec098e308e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"567786b6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae2e56f646cc9101459fc04e73018c3eb6d8c4c6581aa8484829e74d85380300", Pod:"calico-kube-controllers-567786b6b9-gh9kf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliee9850568fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.686 [INFO][4724] k8s.go 608: Cleaning up netns ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.686 [INFO][4724] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" iface="eth0" netns="" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.686 [INFO][4724] k8s.go 615: Releasing IP address(es) ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.686 [INFO][4724] utils.go 188: Calico CNI releasing IP address ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.727 [INFO][4731] ipam_plugin.go 411: Releasing address using handleID ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.727 [INFO][4731] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.727 [INFO][4731] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
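The kubelet pod_startup_latency_tracker record above reports both podStartE2EDuration and podStartSLOduration for calico-apiserver-96c76b-6v7rn, along with the creation, image-pull and observed-running timestamps it derived them from; the SLO figure is the end-to-end startup time with the image-pull window subtracted, which the timestamps in the record bear out. A small Python sketch of that arithmetic, using the timestamps copied from the record (a reading aid for the log entry, not the kubelet's own code):

    from decimal import Decimal

    # Seconds within 14:37 UTC, copied from the pod_startup_latency_tracker record.
    created          = Decimal("41.000000000")   # podCreationTimestamp 14:37:41
    first_pull       = Decimal("44.248712003")   # firstStartedPulling
    last_pull        = Decimal("46.015133744")   # lastFinishedPulling
    observed_running = Decimal("46.745904097")   # watchObservedRunningTime

    e2e = observed_running - created              # end-to-end pod startup
    slo = e2e - (last_pull - first_pull)          # startup time minus image pull
    print(e2e, slo)                               # 5.745904097  3.979482356

Both figures match the 5.745904097s and 3.979482356 reported in the log record.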
Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.741 [WARNING][4731] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.741 [INFO][4731] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" HandleID="k8s-pod-network.e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Workload="localhost-k8s-calico--kube--controllers--567786b6b9--gh9kf-eth0" Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.747 [INFO][4731] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:46.751851 containerd[1245]: 2024-06-25 14:37:46.750 [INFO][4724] k8s.go 621: Teardown processing complete. ContainerID="e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c" Jun 25 14:37:46.752559 containerd[1245]: time="2024-06-25T14:37:46.752518218Z" level=info msg="TearDown network for sandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\" successfully" Jun 25 14:37:46.758740 containerd[1245]: time="2024-06-25T14:37:46.758695939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:37:46.758985 containerd[1245]: time="2024-06-25T14:37:46.758942219Z" level=info msg="RemovePodSandbox \"e552b1abb9a637c97aadcae3ac5f2e2efaf84b43af6190e05def749397e9013c\" returns successfully" Jun 25 14:37:46.760226 containerd[1245]: time="2024-06-25T14:37:46.760196500Z" level=info msg="StopPodSandbox for \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\"" Jun 25 14:37:46.759000 audit[4740]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=4740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:46.759000 audit[4740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffffd831fa0 a2=0 a3=1 items=0 ppid=2410 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:46.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:46.761000 audit[4740]: NETFILTER_CFG table=nat:127 family=2 entries=44 op=nft_register_rule pid=4740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:37:46.761000 audit[4740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=fffffd831fa0 a2=0 a3=1 items=0 ppid=2410 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:46.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.802 [WARNING][4756] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfl4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d97e3989-35c8-44ea-83c9-925e939d51bb", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2", Pod:"csi-node-driver-kfl4t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3e2b024aeed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.802 [INFO][4756] k8s.go 608: Cleaning up netns ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.802 [INFO][4756] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" iface="eth0" netns="" Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.802 [INFO][4756] k8s.go 615: Releasing IP address(es) ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.802 [INFO][4756] utils.go 188: Calico CNI releasing IP address ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.823 [INFO][4764] ipam_plugin.go 411: Releasing address using handleID ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.823 [INFO][4764] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.823 [INFO][4764] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.838 [WARNING][4764] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.838 [INFO][4764] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.839 [INFO][4764] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:46.843284 containerd[1245]: 2024-06-25 14:37:46.841 [INFO][4756] k8s.go 621: Teardown processing complete. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.843825 containerd[1245]: time="2024-06-25T14:37:46.843790673Z" level=info msg="TearDown network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\" successfully" Jun 25 14:37:46.843911 containerd[1245]: time="2024-06-25T14:37:46.843879593Z" level=info msg="StopPodSandbox for \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\" returns successfully" Jun 25 14:37:46.844431 containerd[1245]: time="2024-06-25T14:37:46.844401273Z" level=info msg="RemovePodSandbox for \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\"" Jun 25 14:37:46.844507 containerd[1245]: time="2024-06-25T14:37:46.844445793Z" level=info msg="Forcibly stopping sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\"" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.880 [WARNING][4785] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kfl4t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d97e3989-35c8-44ea-83c9-925e939d51bb", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abf33b31d93b44c55161c44750e8bdd9a9ce88003cf5bdaa82dca1543dbddbb2", Pod:"csi-node-driver-kfl4t", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3e2b024aeed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.880 [INFO][4785] k8s.go 608: Cleaning up netns ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.880 [INFO][4785] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" iface="eth0" netns="" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.881 [INFO][4785] k8s.go 615: Releasing IP address(es) ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.881 [INFO][4785] utils.go 188: Calico CNI releasing IP address ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.901 [INFO][4792] ipam_plugin.go 411: Releasing address using handleID ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.901 [INFO][4792] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.901 [INFO][4792] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.909 [WARNING][4792] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.909 [INFO][4792] ipam_plugin.go 439: Releasing address using workloadID ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" HandleID="k8s-pod-network.fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Workload="localhost-k8s-csi--node--driver--kfl4t-eth0" Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.911 [INFO][4792] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:46.915460 containerd[1245]: 2024-06-25 14:37:46.913 [INFO][4785] k8s.go 621: Teardown processing complete. ContainerID="fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a" Jun 25 14:37:46.916311 containerd[1245]: time="2024-06-25T14:37:46.915505484Z" level=info msg="TearDown network for sandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\" successfully" Jun 25 14:37:46.918266 containerd[1245]: time="2024-06-25T14:37:46.918162764Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:37:46.918266 containerd[1245]: time="2024-06-25T14:37:46.918229884Z" level=info msg="RemovePodSandbox \"fea3ebd098761e0c25e848e12012458cfafc19cd68ecdd34331dca1cd687445a\" returns successfully" Jun 25 14:37:46.930583 containerd[1245]: time="2024-06-25T14:37:46.928917926Z" level=info msg="StopPodSandbox for \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\"" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:46.973 [WARNING][4814] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92fe97e2-6b14-42a5-83ef-fce155119efa", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f", Pod:"coredns-7db6d8ff4d-6snbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief5adfbae88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:46.978 [INFO][4814] k8s.go 608: Cleaning up netns ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:46.978 [INFO][4814] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" iface="eth0" netns="" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:46.978 [INFO][4814] k8s.go 615: Releasing IP address(es) ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:46.978 [INFO][4814] utils.go 188: Calico CNI releasing IP address ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:47.000 [INFO][4821] ipam_plugin.go 411: Releasing address using handleID ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:47.000 [INFO][4821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:47.000 [INFO][4821] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:47.008 [WARNING][4821] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:47.008 [INFO][4821] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:47.010 [INFO][4821] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:47.014016 containerd[1245]: 2024-06-25 14:37:47.011 [INFO][4814] k8s.go 621: Teardown processing complete. ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.014613 containerd[1245]: time="2024-06-25T14:37:47.014060859Z" level=info msg="TearDown network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\" successfully" Jun 25 14:37:47.014613 containerd[1245]: time="2024-06-25T14:37:47.014090659Z" level=info msg="StopPodSandbox for \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\" returns successfully" Jun 25 14:37:47.016128 containerd[1245]: time="2024-06-25T14:37:47.014971699Z" level=info msg="RemovePodSandbox for \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\"" Jun 25 14:37:47.016128 containerd[1245]: time="2024-06-25T14:37:47.015043339Z" level=info msg="Forcibly stopping sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\"" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.067 [WARNING][4845] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92fe97e2-6b14-42a5-83ef-fce155119efa", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5812f1bcced20f141e898d6451f51bf6ced224190aa688e485f1cb725701881f", Pod:"coredns-7db6d8ff4d-6snbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief5adfbae88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.067 [INFO][4845] k8s.go 608: Cleaning up netns ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.067 [INFO][4845] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" iface="eth0" netns="" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.067 [INFO][4845] k8s.go 615: Releasing IP address(es) ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.067 [INFO][4845] utils.go 188: Calico CNI releasing IP address ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.085 [INFO][4853] ipam_plugin.go 411: Releasing address using handleID ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.085 [INFO][4853] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.085 [INFO][4853] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.094 [WARNING][4853] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.094 [INFO][4853] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" HandleID="k8s-pod-network.f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Workload="localhost-k8s-coredns--7db6d8ff4d--6snbw-eth0" Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.095 [INFO][4853] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:47.099303 containerd[1245]: 2024-06-25 14:37:47.097 [INFO][4845] k8s.go 621: Teardown processing complete. ContainerID="f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778" Jun 25 14:37:47.099303 containerd[1245]: time="2024-06-25T14:37:47.099263791Z" level=info msg="TearDown network for sandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\" successfully" Jun 25 14:37:47.102828 containerd[1245]: time="2024-06-25T14:37:47.102791792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:37:47.102897 containerd[1245]: time="2024-06-25T14:37:47.102855472Z" level=info msg="RemovePodSandbox \"f56a9af90160008780306ef63c0ce1fd5ae203a44fcf1596ef7f84fbebf50778\" returns successfully" Jun 25 14:37:47.103309 containerd[1245]: time="2024-06-25T14:37:47.103280112Z" level=info msg="StopPodSandbox for \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\"" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.143 [WARNING][4875] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"77c6372f-63bb-45d5-91a8-a2813fbef04f", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694", Pod:"coredns-7db6d8ff4d-rn8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04415f7eb1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.143 [INFO][4875] k8s.go 608: Cleaning up netns ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.143 [INFO][4875] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" iface="eth0" netns="" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.143 [INFO][4875] k8s.go 615: Releasing IP address(es) ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.144 [INFO][4875] utils.go 188: Calico CNI releasing IP address ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.162 [INFO][4882] ipam_plugin.go 411: Releasing address using handleID ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.162 [INFO][4882] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.162 [INFO][4882] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.171 [WARNING][4882] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.171 [INFO][4882] ipam_plugin.go 439: Releasing address using workloadID ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.173 [INFO][4882] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:47.177760 containerd[1245]: 2024-06-25 14:37:47.176 [INFO][4875] k8s.go 621: Teardown processing complete. ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.178374 containerd[1245]: time="2024-06-25T14:37:47.178335603Z" level=info msg="TearDown network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\" successfully" Jun 25 14:37:47.178437 containerd[1245]: time="2024-06-25T14:37:47.178422403Z" level=info msg="StopPodSandbox for \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\" returns successfully" Jun 25 14:37:47.179010 containerd[1245]: time="2024-06-25T14:37:47.178953123Z" level=info msg="RemovePodSandbox for \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\"" Jun 25 14:37:47.179192 containerd[1245]: time="2024-06-25T14:37:47.179132963Z" level=info msg="Forcibly stopping sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\"" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.217 [WARNING][4906] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"77c6372f-63bb-45d5-91a8-a2813fbef04f", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 37, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4ad6d7e913d9c4fec07faa574588876cabfc32daaf587e6c6b72657cc536694", Pod:"coredns-7db6d8ff4d-rn8b9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04415f7eb1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.218 [INFO][4906] k8s.go 608: Cleaning up netns ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.219 [INFO][4906] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" iface="eth0" netns="" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.219 [INFO][4906] k8s.go 615: Releasing IP address(es) ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.219 [INFO][4906] utils.go 188: Calico CNI releasing IP address ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.248 [INFO][4914] ipam_plugin.go 411: Releasing address using handleID ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.248 [INFO][4914] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.248 [INFO][4914] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.261 [WARNING][4914] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.261 [INFO][4914] ipam_plugin.go 439: Releasing address using workloadID ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" HandleID="k8s-pod-network.332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Workload="localhost-k8s-coredns--7db6d8ff4d--rn8b9-eth0" Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.264 [INFO][4914] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:37:47.271395 containerd[1245]: 2024-06-25 14:37:47.267 [INFO][4906] k8s.go 621: Teardown processing complete. ContainerID="332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f" Jun 25 14:37:47.271869 containerd[1245]: time="2024-06-25T14:37:47.271453976Z" level=info msg="TearDown network for sandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\" successfully" Jun 25 14:37:47.303828 containerd[1245]: time="2024-06-25T14:37:47.303764541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:37:47.303969 containerd[1245]: time="2024-06-25T14:37:47.303892901Z" level=info msg="RemovePodSandbox \"332b3e8629936c85231fee11f62995955574563680708bd336e9d2038fa7882f\" returns successfully" Jun 25 14:37:47.750018 kubelet[2249]: I0625 14:37:47.749968 2249 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:37:48.547122 systemd[1]: Started sshd@18-10.0.0.122:22-10.0.0.1:49354.service - OpenSSH per-connection server daemon (10.0.0.1:49354). Jun 25 14:37:48.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.122:22-10.0.0.1:49354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:37:48.548129 kernel: kauditd_printk_skb: 82 callbacks suppressed Jun 25 14:37:48.548241 kernel: audit: type=1130 audit(1719326268.546:735): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.122:22-10.0.0.1:49354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:37:48.585000 audit[4943]: USER_ACCT pid=4943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:48.586949 sshd[4943]: Accepted publickey for core from 10.0.0.1 port 49354 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE Jun 25 14:37:48.587000 audit[4943]: CRED_ACQ pid=4943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:48.589673 sshd[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:37:48.592211 kernel: audit: type=1101 audit(1719326268.585:736): pid=4943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:48.592356 kernel: audit: type=1103 audit(1719326268.587:737): pid=4943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 14:37:48.592440 kernel: audit: type=1006 audit(1719326268.587:738): pid=4943 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 14:37:48.593852 kernel: audit: type=1300 audit(1719326268.587:738): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8054d60 a2=3 a3=1 items=0 ppid=1 pid=4943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:48.587000 audit[4943]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8054d60 a2=3 a3=1 items=0 ppid=1 pid=4943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:37:48.595624 systemd-logind[1235]: New session 19 of user core. Jun 25 14:37:48.587000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:48.597302 kernel: audit: type=1327 audit(1719326268.587:738): proctitle=737368643A20636F7265205B707269765D Jun 25 14:37:48.607282 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 14:37:48.610000 audit[4943]: USER_START pid=4943 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.611000 audit[4945]: CRED_ACQ pid=4945 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.616214 kernel: audit: type=1105 audit(1719326268.610:739): pid=4943 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.616274 kernel: audit: type=1103 audit(1719326268.611:740): pid=4945 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.776812 sshd[4943]: pam_unix(sshd:session): session closed for user core
Jun 25 14:37:48.776000 audit[4943]: USER_END pid=4943 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.780157 systemd[1]: sshd@18-10.0.0.122:22-10.0.0.1:49354.service: Deactivated successfully.
Jun 25 14:37:48.780967 systemd[1]: session-19.scope: Deactivated successfully.
Jun 25 14:37:48.777000 audit[4943]: CRED_DISP pid=4943 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.781518 systemd-logind[1235]: Session 19 logged out. Waiting for processes to exit.
Jun 25 14:37:48.782297 systemd-logind[1235]: Removed session 19.
Jun 25 14:37:48.784029 kernel: audit: type=1106 audit(1719326268.776:741): pid=4943 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.784096 kernel: audit: type=1104 audit(1719326268.777:742): pid=4943 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:48.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.122:22-10.0.0.1:49354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:37:50.139428 kubelet[2249]: I0625 14:37:50.139353 2249 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 25 14:37:50.261000 audit[4958]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=4958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:37:50.261000 audit[4958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc6fbbd50 a2=0 a3=1 items=0 ppid=2410 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:50.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:37:50.262000 audit[4958]: NETFILTER_CFG table=nat:129 family=2 entries=51 op=nft_register_chain pid=4958 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:37:50.262000 audit[4958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18564 a0=3 a1=ffffc6fbbd50 a2=0 a3=1 items=0 ppid=2410 pid=4958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:50.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:37:53.392000 audit[4960]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=4960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:37:53.392000 audit[4960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffed57d250 a2=0 a3=1 items=0 ppid=2410 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:53.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:37:53.395000 audit[4960]: NETFILTER_CFG table=nat:131 family=2 entries=58 op=nft_register_chain pid=4960 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 14:37:53.395000 audit[4960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20452 a0=3 a1=ffffed57d250 a2=0 a3=1 items=0 ppid=2410 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:53.395000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 14:37:53.792618 systemd[1]: Started sshd@19-10.0.0.122:22-10.0.0.1:37166.service - OpenSSH per-connection server daemon (10.0.0.1:37166).
Jun 25 14:37:53.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.122:22-10.0.0.1:37166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:37:53.793305 kernel: kauditd_printk_skb: 13 callbacks suppressed
Jun 25 14:37:53.793355 kernel: audit: type=1130 audit(1719326273.791:748): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.122:22-10.0.0.1:37166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:37:53.823000 audit[4962]: USER_ACCT pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:53.824436 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 37166 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE
Jun 25 14:37:53.825517 sshd[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 14:37:53.824000 audit[4962]: CRED_ACQ pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:53.828823 kernel: audit: type=1101 audit(1719326273.823:749): pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:53.828886 kernel: audit: type=1103 audit(1719326273.824:750): pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:53.828908 kernel: audit: type=1006 audit(1719326273.824:751): pid=4962 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Jun 25 14:37:53.824000 audit[4962]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff04d9270 a2=3 a3=1 items=0 ppid=1 pid=4962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:53.832535 kernel: audit: type=1300 audit(1719326273.824:751): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff04d9270 a2=3 a3=1 items=0 ppid=1 pid=4962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:53.832596 kernel: audit: type=1327 audit(1719326273.824:751): proctitle=737368643A20636F7265205B707269765D
Jun 25 14:37:53.824000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 14:37:53.832923 systemd-logind[1235]: New session 20 of user core.
Jun 25 14:37:53.844620 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 25 14:37:53.850000 audit[4962]: USER_START pid=4962 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:53.852000 audit[4964]: CRED_ACQ pid=4964 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:53.856522 kernel: audit: type=1105 audit(1719326273.850:752): pid=4962 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:53.856610 kernel: audit: type=1103 audit(1719326273.852:753): pid=4964 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:54.022586 sshd[4962]: pam_unix(sshd:session): session closed for user core
Jun 25 14:37:54.022000 audit[4962]: USER_END pid=4962 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:54.026547 systemd[1]: sshd@19-10.0.0.122:22-10.0.0.1:37166.service: Deactivated successfully.
Jun 25 14:37:54.023000 audit[4962]: CRED_DISP pid=4962 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:54.027360 systemd[1]: session-20.scope: Deactivated successfully.
Jun 25 14:37:54.028898 kernel: audit: type=1106 audit(1719326274.022:754): pid=4962 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:54.028986 kernel: audit: type=1104 audit(1719326274.023:755): pid=4962 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:54.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.122:22-10.0.0.1:37166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:37:54.029383 systemd-logind[1235]: Session 20 logged out. Waiting for processes to exit.
Jun 25 14:37:54.030469 systemd-logind[1235]: Removed session 20.
Jun 25 14:37:59.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.122:22-10.0.0.1:57840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:37:59.036823 systemd[1]: Started sshd@20-10.0.0.122:22-10.0.0.1:57840.service - OpenSSH per-connection server daemon (10.0.0.1:57840).
Jun 25 14:37:59.039547 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jun 25 14:37:59.039657 kernel: audit: type=1130 audit(1719326279.035:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.122:22-10.0.0.1:57840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:37:59.072000 audit[4983]: USER_ACCT pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.073590 sshd[4983]: Accepted publickey for core from 10.0.0.1 port 57840 ssh2: RSA SHA256:hWxi6SYOks8V7/NLXiiveGYFWDf9XKfJ+ThHS+GuebE
Jun 25 14:37:59.076012 kernel: audit: type=1101 audit(1719326279.072:758): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.075000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.079278 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 14:37:59.080411 kernel: audit: type=1103 audit(1719326279.075:759): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.080475 kernel: audit: type=1006 audit(1719326279.075:760): pid=4983 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Jun 25 14:37:59.080501 kernel: audit: type=1300 audit(1719326279.075:760): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcaa39360 a2=3 a3=1 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:59.075000 audit[4983]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcaa39360 a2=3 a3=1 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 14:37:59.083646 kernel: audit: type=1327 audit(1719326279.075:760): proctitle=737368643A20636F7265205B707269765D
Jun 25 14:37:59.075000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 14:37:59.084137 systemd-logind[1235]: New session 21 of user core.
Jun 25 14:37:59.098280 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 25 14:37:59.105000 audit[4983]: USER_START pid=4983 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.106000 audit[4985]: CRED_ACQ pid=4985 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.110841 kernel: audit: type=1105 audit(1719326279.105:761): pid=4983 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.110904 kernel: audit: type=1103 audit(1719326279.106:762): pid=4985 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.247825 sshd[4983]: pam_unix(sshd:session): session closed for user core
Jun 25 14:37:59.248000 audit[4983]: USER_END pid=4983 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.249000 audit[4983]: CRED_DISP pid=4983 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.253119 systemd-logind[1235]: Session 21 logged out. Waiting for processes to exit.
Jun 25 14:37:59.253341 systemd[1]: sshd@20-10.0.0.122:22-10.0.0.1:57840.service: Deactivated successfully.
Jun 25 14:37:59.254237 systemd[1]: session-21.scope: Deactivated successfully.
Jun 25 14:37:59.254928 systemd-logind[1235]: Removed session 21.
Jun 25 14:37:59.256722 kernel: audit: type=1106 audit(1719326279.248:763): pid=4983 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.256809 kernel: audit: type=1104 audit(1719326279.249:764): pid=4983 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 14:37:59.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.122:22-10.0.0.1:57840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 14:38:00.637000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 14:38:00.637000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 14:38:00.637000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=c a1=4001253200 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null)
Jun 25 14:38:00.637000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 14:38:00.637000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4001ab7040 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null)
Jun 25 14:38:00.637000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 14:38:00.637000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 14:38:00.637000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001a63520 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null)
Jun 25 14:38:00.637000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 14:38:00.637000 audit[2116]: AVC avc: denied { watch } for pid=2116 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7751 scontext=system_u:system_r:container_t:s0:c692,c882 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 14:38:00.637000 audit[2116]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001242aa0 a2=fc6 a3=0 items=0 ppid=1971 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c692,c882 key=(null)
Jun 25 14:38:00.637000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269