Jun 25 16:23:19.914148 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024
Jun 25 16:23:19.914165 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9
Jun 25 16:23:19.914174 kernel: BIOS-provided physical RAM map:
Jun 25 16:23:19.914179 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 25 16:23:19.914184 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jun 25 16:23:19.914191 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jun 25 16:23:19.914204 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jun 25 16:23:19.914212 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jun 25 16:23:19.914218 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jun 25 16:23:19.914224 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jun 25 16:23:19.914234 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jun 25 16:23:19.914241 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jun 25 16:23:19.914248 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jun 25 16:23:19.914254 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jun 25 16:23:19.914263 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jun 25 16:23:19.914270 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jun 25 16:23:19.914276 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jun 25 16:23:19.914281 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jun 25 16:23:19.914286 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jun 25 16:23:19.914291 kernel: NX (Execute Disable) protection: active
Jun 25 16:23:19.914297 kernel: efi: EFI v2.70 by EDK II
Jun 25 16:23:19.914304 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018
Jun 25 16:23:19.914311 kernel: SMBIOS 2.8 present.
Jun 25 16:23:19.914318 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Jun 25 16:23:19.914325 kernel: Hypervisor detected: KVM
Jun 25 16:23:19.914332 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 25 16:23:19.914339 kernel: kvm-clock: using sched offset of 4912410576 cycles
Jun 25 16:23:19.914347 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 25 16:23:19.914353 kernel: tsc: Detected 2794.750 MHz processor
Jun 25 16:23:19.914358 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 25 16:23:19.914364 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 25 16:23:19.914369 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jun 25 16:23:19.914375 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 25 16:23:19.914380 kernel: Using GB pages for direct mapping
Jun 25 16:23:19.914386 kernel: Secure boot disabled
Jun 25 16:23:19.914392 kernel: ACPI: Early table checksum verification disabled
Jun 25 16:23:19.914398 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jun 25 16:23:19.914403 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Jun 25 16:23:19.914409 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 16:23:19.914414 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 16:23:19.914423 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jun 25 16:23:19.914429 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 16:23:19.914436 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 16:23:19.914442 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 25 16:23:19.914447 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Jun 25 16:23:19.914453 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Jun 25 16:23:19.914459 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Jun 25 16:23:19.914465 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jun 25 16:23:19.914471 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Jun 25 16:23:19.914477 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Jun 25 16:23:19.914483 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Jun 25 16:23:19.914489 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Jun 25 16:23:19.914495 kernel: No NUMA configuration found
Jun 25 16:23:19.914500 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jun 25 16:23:19.914506 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jun 25 16:23:19.914525 kernel: Zone ranges:
Jun 25 16:23:19.914531 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 25 16:23:19.914537 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jun 25 16:23:19.914543 kernel: Normal empty
Jun 25 16:23:19.914550 kernel: Movable zone start for each node
Jun 25 16:23:19.914556 kernel: Early memory node ranges
Jun 25 16:23:19.914561 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 25 16:23:19.914567 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jun 25 16:23:19.914573 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jun 25 16:23:19.914579 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jun 25 16:23:19.914584 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jun 25 16:23:19.914590 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jun 25 16:23:19.914596 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jun 25 16:23:19.914603 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 16:23:19.914608 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 25 16:23:19.914614 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jun 25 16:23:19.914620 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 25 16:23:19.914626 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jun 25 16:23:19.914632 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jun 25 16:23:19.914637 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jun 25 16:23:19.914643 kernel: ACPI: PM-Timer IO Port: 0xb008
Jun 25 16:23:19.914649 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 25 16:23:19.914656 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 25 16:23:19.914662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 25 16:23:19.914667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 25 16:23:19.914673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 25 16:23:19.914679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 25 16:23:19.914685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 25 16:23:19.914690 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 25 16:23:19.914696 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 25 16:23:19.914702 kernel: TSC deadline timer available
Jun 25 16:23:19.914709 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jun 25 16:23:19.914714 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 25 16:23:19.914720 kernel: kvm-guest: setup PV sched yield
Jun 25 16:23:19.914726 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Jun 25 16:23:19.914743 kernel: Booting paravirtualized kernel on KVM
Jun 25 16:23:19.914750 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 25 16:23:19.914773 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 25 16:23:19.914780 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u524288
Jun 25 16:23:19.914786 kernel: pcpu-alloc: s194792 r8192 d30488 u524288 alloc=1*2097152
Jun 25 16:23:19.914793 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 25 16:23:19.914800 kernel: kvm-guest: PV spinlocks enabled
Jun 25 16:23:19.914808 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 25 16:23:19.914815 kernel: Fallback order for Node 0: 0
Jun 25 16:23:19.914824 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jun 25 16:23:19.914832 kernel: Policy zone: DMA32
Jun 25 16:23:19.914841 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9
Jun 25 16:23:19.914850 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 25 16:23:19.914857 kernel: random: crng init done
Jun 25 16:23:19.914864 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 25 16:23:19.914870 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 25 16:23:19.914877 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 25 16:23:19.914883 kernel: Memory: 2392504K/2567000K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 174236K reserved, 0K cma-reserved)
Jun 25 16:23:19.914889 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 25 16:23:19.914895 kernel: ftrace: allocating 36080 entries in 141 pages
Jun 25 16:23:19.914901 kernel: ftrace: allocated 141 pages with 4 groups
Jun 25 16:23:19.914907 kernel: Dynamic Preempt: voluntary
Jun 25 16:23:19.914913 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 25 16:23:19.914920 kernel: rcu: RCU event tracing is enabled.
Jun 25 16:23:19.914926 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 25 16:23:19.914932 kernel: Trampoline variant of Tasks RCU enabled.
Jun 25 16:23:19.914938 kernel: Rude variant of Tasks RCU enabled.
Jun 25 16:23:19.914944 kernel: Tracing variant of Tasks RCU enabled.
Jun 25 16:23:19.914955 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 25 16:23:19.914962 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 25 16:23:19.914968 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 25 16:23:19.914974 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 25 16:23:19.914980 kernel: Console: colour dummy device 80x25
Jun 25 16:23:19.914987 kernel: printk: console [ttyS0] enabled
Jun 25 16:23:19.914993 kernel: ACPI: Core revision 20220331
Jun 25 16:23:19.915000 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 25 16:23:19.915006 kernel: APIC: Switch to symmetric I/O mode setup
Jun 25 16:23:19.915012 kernel: x2apic enabled
Jun 25 16:23:19.915018 kernel: Switched APIC routing to physical x2apic.
Jun 25 16:23:19.915024 kernel: kvm-guest: setup PV IPIs
Jun 25 16:23:19.915032 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 25 16:23:19.915038 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 25 16:23:19.915045 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jun 25 16:23:19.915052 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 25 16:23:19.915060 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 25 16:23:19.915069 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 25 16:23:19.915083 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 25 16:23:19.915092 kernel: Spectre V2 : Mitigation: Retpolines
Jun 25 16:23:19.915101 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jun 25 16:23:19.915112 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jun 25 16:23:19.915121 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 25 16:23:19.915129 kernel: RETBleed: Mitigation: untrained return thunk
Jun 25 16:23:19.915138 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 25 16:23:19.915146 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 25 16:23:19.915155 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 25 16:23:19.915163 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 25 16:23:19.915172 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 25 16:23:19.915182 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 25 16:23:19.915190 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 25 16:23:19.915199 kernel: Freeing SMP alternatives memory: 32K
Jun 25 16:23:19.915207 kernel: pid_max: default: 32768 minimum: 301
Jun 25 16:23:19.915216 kernel: LSM: Security Framework initializing
Jun 25 16:23:19.915224 kernel: SELinux: Initializing.
Jun 25 16:23:19.915232 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 16:23:19.915241 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 25 16:23:19.915249 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 25 16:23:19.915260 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jun 25 16:23:19.915268 kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jun 25 16:23:19.915277 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jun 25 16:23:19.915285 kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jun 25 16:23:19.915294 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jun 25 16:23:19.915302 kernel: cblist_init_generic: Setting shift to 2 and lim to 1.
Jun 25 16:23:19.915310 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 25 16:23:19.915318 kernel: ... version: 0
Jun 25 16:23:19.915327 kernel: ... bit width: 48
Jun 25 16:23:19.915335 kernel: ... generic registers: 6
Jun 25 16:23:19.915345 kernel: ... value mask: 0000ffffffffffff
Jun 25 16:23:19.915353 kernel: ... max period: 00007fffffffffff
Jun 25 16:23:19.915362 kernel: ... fixed-purpose events: 0
Jun 25 16:23:19.915370 kernel: ... event mask: 000000000000003f
Jun 25 16:23:19.915378 kernel: signal: max sigframe size: 1776
Jun 25 16:23:19.915387 kernel: rcu: Hierarchical SRCU implementation.
Jun 25 16:23:19.915396 kernel: rcu: Max phase no-delay instances is 400.
Jun 25 16:23:19.915404 kernel: smp: Bringing up secondary CPUs ...
Jun 25 16:23:19.915412 kernel: x86: Booting SMP configuration:
Jun 25 16:23:19.915422 kernel: .... node #0, CPUs: #1 #2 #3
Jun 25 16:23:19.915429 kernel: smp: Brought up 1 node, 4 CPUs
Jun 25 16:23:19.915437 kernel: smpboot: Max logical packages: 1
Jun 25 16:23:19.915445 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jun 25 16:23:19.915453 kernel: devtmpfs: initialized
Jun 25 16:23:19.915462 kernel: x86/mm: Memory block size: 128MB
Jun 25 16:23:19.915470 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jun 25 16:23:19.915479 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jun 25 16:23:19.915488 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jun 25 16:23:19.915497 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jun 25 16:23:19.915507 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jun 25 16:23:19.915530 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 25 16:23:19.915538 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 25 16:23:19.915547 kernel: pinctrl core: initialized pinctrl subsystem
Jun 25 16:23:19.915555 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 25 16:23:19.915564 kernel: audit: initializing netlink subsys (disabled)
Jun 25 16:23:19.915572 kernel: audit: type=2000 audit(1719332599.895:1): state=initialized audit_enabled=0 res=1
Jun 25 16:23:19.915581 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 25 16:23:19.915592 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 25 16:23:19.915600 kernel: cpuidle: using governor menu
Jun 25 16:23:19.915609 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 25 16:23:19.915618 kernel: dca service started, version 1.12.1
Jun 25 16:23:19.915626 kernel: PCI: Using configuration type 1 for base access
Jun 25 16:23:19.915635 kernel: PCI: Using configuration type 1 for extended access
Jun 25 16:23:19.915643 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 25 16:23:19.915652 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 25 16:23:19.915660 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 25 16:23:19.915670 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 25 16:23:19.915679 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 25 16:23:19.915687 kernel: ACPI: Added _OSI(Module Device)
Jun 25 16:23:19.915696 kernel: ACPI: Added _OSI(Processor Device)
Jun 25 16:23:19.915704 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jun 25 16:23:19.915713 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 25 16:23:19.915721 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 25 16:23:19.915729 kernel: ACPI: Interpreter enabled
Jun 25 16:23:19.915774 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 25 16:23:19.915782 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 25 16:23:19.915793 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 25 16:23:19.915801 kernel: PCI: Using E820 reservations for host bridge windows
Jun 25 16:23:19.915810 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 25 16:23:19.915818 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 25 16:23:19.915973 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 25 16:23:19.915988 kernel: acpiphp: Slot [3] registered
Jun 25 16:23:19.915997 kernel: acpiphp: Slot [4] registered
Jun 25 16:23:19.916008 kernel: acpiphp: Slot [5] registered
Jun 25 16:23:19.916017 kernel: acpiphp: Slot [6] registered
Jun 25 16:23:19.916025 kernel: acpiphp: Slot [7] registered
Jun 25 16:23:19.916034 kernel: acpiphp: Slot [8] registered
Jun 25 16:23:19.916042 kernel: acpiphp: Slot [9] registered
Jun 25 16:23:19.916051 kernel: acpiphp: Slot [10] registered
Jun 25 16:23:19.916059 kernel: acpiphp: Slot [11] registered
Jun 25 16:23:19.916067 kernel: acpiphp: Slot [12] registered
Jun 25 16:23:19.916076 kernel: acpiphp: Slot [13] registered
Jun 25 16:23:19.916084 kernel: acpiphp: Slot [14] registered
Jun 25 16:23:19.916094 kernel: acpiphp: Slot [15] registered
Jun 25 16:23:19.916103 kernel: acpiphp: Slot [16] registered
Jun 25 16:23:19.916111 kernel: acpiphp: Slot [17] registered
Jun 25 16:23:19.916119 kernel: acpiphp: Slot [18] registered
Jun 25 16:23:19.916128 kernel: acpiphp: Slot [19] registered
Jun 25 16:23:19.916136 kernel: acpiphp: Slot [20] registered
Jun 25 16:23:19.916144 kernel: acpiphp: Slot [21] registered
Jun 25 16:23:19.916153 kernel: acpiphp: Slot [22] registered
Jun 25 16:23:19.916161 kernel: acpiphp: Slot [23] registered
Jun 25 16:23:19.916171 kernel: acpiphp: Slot [24] registered
Jun 25 16:23:19.916180 kernel: acpiphp: Slot [25] registered
Jun 25 16:23:19.916188 kernel: acpiphp: Slot [26] registered
Jun 25 16:23:19.916197 kernel: acpiphp: Slot [27] registered
Jun 25 16:23:19.916205 kernel: acpiphp: Slot [28] registered
Jun 25 16:23:19.916213 kernel: acpiphp: Slot [29] registered
Jun 25 16:23:19.920344 kernel: acpiphp: Slot [30] registered
Jun 25 16:23:19.920354 kernel: acpiphp: Slot [31] registered
Jun 25 16:23:19.920363 kernel: PCI host bridge to bus 0000:00
Jun 25 16:23:19.920482 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 25 16:23:19.920590 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 25 16:23:19.920673 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 25 16:23:19.920771 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Jun 25 16:23:19.920857 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Jun 25 16:23:19.920940 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 25 16:23:19.921100 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jun 25 16:23:19.921231 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jun 25 16:23:19.921332 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jun 25 16:23:19.921436 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Jun 25 16:23:19.921543 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jun 25 16:23:19.921637 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jun 25 16:23:19.925579 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jun 25 16:23:19.925679 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jun 25 16:23:19.925814 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jun 25 16:23:19.925909 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jun 25 16:23:19.926001 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jun 25 16:23:19.926101 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Jun 25 16:23:19.926194 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jun 25 16:23:19.926290 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Jun 25 16:23:19.926387 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jun 25 16:23:19.926479 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Jun 25 16:23:19.926585 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 25 16:23:19.926687 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Jun 25 16:23:19.926798 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Jun 25 16:23:19.926897 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jun 25 16:23:19.926989 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jun 25 16:23:19.927095 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jun 25 16:23:19.927188 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jun 25 16:23:19.927282 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jun 25 16:23:19.927373 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jun 25 16:23:19.927477 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Jun 25 16:23:19.927584 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jun 25 16:23:19.927676 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Jun 25 16:23:19.927819 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jun 25 16:23:19.927925 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jun 25 16:23:19.927962 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 25 16:23:19.927972 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 25 16:23:19.927981 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 25 16:23:19.927990 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 25 16:23:19.927999 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 25 16:23:19.928008 kernel: iommu: Default domain type: Translated
Jun 25 16:23:19.928020 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 25 16:23:19.928029 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 25 16:23:19.928038 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 25 16:23:19.928046 kernel: PTP clock support registered
Jun 25 16:23:19.928055 kernel: Registered efivars operations
Jun 25 16:23:19.928064 kernel: PCI: Using ACPI for IRQ routing
Jun 25 16:23:19.928072 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 25 16:23:19.928081 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jun 25 16:23:19.928090 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jun 25 16:23:19.928101 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jun 25 16:23:19.928109 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jun 25 16:23:19.928203 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 25 16:23:19.928291 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 25 16:23:19.928378 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 25 16:23:19.928391 kernel: vgaarb: loaded
Jun 25 16:23:19.928400 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 25 16:23:19.928409 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 25 16:23:19.928418 kernel: clocksource: Switched to clocksource kvm-clock
Jun 25 16:23:19.928430 kernel: VFS: Disk quotas dquot_6.6.0
Jun 25 16:23:19.928439 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 25 16:23:19.928447 kernel: pnp: PnP ACPI init
Jun 25 16:23:19.928553 kernel: pnp 00:02: [dma 2]
Jun 25 16:23:19.928567 kernel: pnp: PnP ACPI: found 6 devices
Jun 25 16:23:19.928576 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 25 16:23:19.928585 kernel: NET: Registered PF_INET protocol family
Jun 25 16:23:19.928594 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 25 16:23:19.928605 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 25 16:23:19.928614 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 25 16:23:19.928623 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 25 16:23:19.928631 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 25 16:23:19.928640 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 25 16:23:19.928649 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 16:23:19.928658 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 25 16:23:19.928666 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 25 16:23:19.928675 kernel: NET: Registered PF_XDP protocol family
Jun 25 16:23:19.928784 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jun 25 16:23:19.929993 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jun 25 16:23:19.930081 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 25 16:23:19.930165 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 25 16:23:19.930247 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 25 16:23:19.930328 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Jun 25 16:23:19.930410 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Jun 25 16:23:19.930502 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 25 16:23:19.930614 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 25 16:23:19.930627 kernel: PCI: CLS 0 bytes, default 64
Jun 25 16:23:19.930636 kernel: Initialise system trusted keyrings
Jun 25 16:23:19.930645 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 25 16:23:19.930653 kernel: Key type asymmetric registered
Jun 25 16:23:19.930662 kernel: Asymmetric key parser 'x509' registered
Jun 25 16:23:19.930671 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed
Jun 25 16:23:19.930680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jun 25 16:23:19.930691 kernel: io scheduler mq-deadline registered
Jun 25 16:23:19.930700 kernel: io scheduler kyber registered
Jun 25 16:23:19.930709 kernel: io scheduler bfq registered
Jun 25 16:23:19.930717 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 25 16:23:19.930727 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 25 16:23:19.930748 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jun 25 16:23:19.930757 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 25 16:23:19.930766 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 25 16:23:19.930775 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 25 16:23:19.930787 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 25 16:23:19.930795 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 25 16:23:19.930806 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 25 16:23:19.930930 kernel: rtc_cmos 00:05: RTC can wake from S4
Jun 25 16:23:19.930948 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 25 16:23:19.931031 kernel: rtc_cmos 00:05: registered as rtc0
Jun 25 16:23:19.931115 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T16:23:19 UTC (1719332599)
Jun 25 16:23:19.931203 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jun 25 16:23:19.931218 kernel: efifb: probing for efifb
Jun 25 16:23:19.931228 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jun 25 16:23:19.931237 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jun 25 16:23:19.931246 kernel: efifb: scrolling: redraw
Jun 25 16:23:19.931255 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jun 25 16:23:19.931264 kernel: Console: switching to colour frame buffer device 100x37
Jun 25 16:23:19.931273 kernel: fb0: EFI VGA frame buffer device
Jun 25 16:23:19.931282 kernel: pstore: Registered efi as persistent store backend
Jun 25 16:23:19.931292 kernel: NET: Registered PF_INET6 protocol family
Jun 25 16:23:19.931302 kernel: Segment Routing with IPv6
Jun 25 16:23:19.931311 kernel: In-situ OAM (IOAM) with IPv6
Jun 25 16:23:19.931320 kernel: NET: Registered PF_PACKET protocol family
Jun 25 16:23:19.931329 kernel: Key type dns_resolver registered
Jun 25 16:23:19.931338 kernel: IPI shorthand broadcast: enabled
Jun 25 16:23:19.931347 kernel: sched_clock: Marking stable (612409463, 114668496)->(748460206, -21382247)
Jun 25 16:23:19.931357 kernel: registered taskstats version 1
Jun 25 16:23:19.931368 kernel: Loading compiled-in X.509 certificates
Jun 25 16:23:19.931378 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137'
Jun 25 16:23:19.931387 kernel: Key type .fscrypt registered
Jun 25 16:23:19.931395 kernel: Key type fscrypt-provisioning registered
Jun 25 16:23:19.931404 kernel: pstore: Using crash dump compression: deflate
Jun 25 16:23:19.931413 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 25 16:23:19.931423 kernel: ima: Allocated hash algorithm: sha1
Jun 25 16:23:19.931432 kernel: ima: No architecture policies found
Jun 25 16:23:19.931442 kernel: clk: Disabling unused clocks
Jun 25 16:23:19.931451 kernel: Freeing unused kernel image (initmem) memory: 47156K
Jun 25 16:23:19.931461 kernel: Write protecting the kernel read-only data: 34816k
Jun 25 16:23:19.931470 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jun 25 16:23:19.931479 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K
Jun 25 16:23:19.931488 kernel: Run /init as init process
Jun 25 16:23:19.931497 kernel: with arguments:
Jun 25 16:23:19.931506 kernel: /init
Jun 25 16:23:19.931526 kernel: with environment:
Jun 25 16:23:19.931537 kernel: HOME=/
Jun 25 16:23:19.931546 kernel: TERM=linux
Jun 25 16:23:19.931555 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 25 16:23:19.931567 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jun 25 16:23:19.931580 systemd[1]: Detected virtualization kvm.
Jun 25 16:23:19.931590 systemd[1]: Detected architecture x86-64.
Jun 25 16:23:19.931600 systemd[1]: Running in initrd.
Jun 25 16:23:19.931611 systemd[1]: No hostname configured, using default hostname.
Jun 25 16:23:19.931620 systemd[1]: Hostname set to .
Jun 25 16:23:19.931631 systemd[1]: Initializing machine ID from VM UUID.
Jun 25 16:23:19.931640 systemd[1]: Queued start job for default target initrd.target.
Jun 25 16:23:19.931650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 25 16:23:19.931660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 25 16:23:19.931669 systemd[1]: Reached target paths.target - Path Units.
Jun 25 16:23:19.931679 systemd[1]: Reached target slices.target - Slice Units.
Jun 25 16:23:19.931690 systemd[1]: Reached target swap.target - Swaps.
Jun 25 16:23:19.931699 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 16:23:19.931710 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 25 16:23:19.931719 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 25 16:23:19.931742 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jun 25 16:23:19.931755 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 25 16:23:19.931766 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jun 25 16:23:19.931780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 25 16:23:19.931790 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 25 16:23:19.931800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 25 16:23:19.931810 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 16:23:19.931820 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 25 16:23:19.931829 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 25 16:23:19.931839 systemd[1]: Starting systemd-fsck-usr.service...
Jun 25 16:23:19.931849 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 25 16:23:19.931859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 25 16:23:19.931871 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console...
Jun 25 16:23:19.931881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 25 16:23:19.931893 systemd[1]: Finished systemd-fsck-usr.service.
Jun 25 16:23:19.931903 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console.
Jun 25 16:23:19.931913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 25 16:23:19.931923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 25 16:23:19.931933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 25 16:23:19.931943 kernel: audit: type=1130 audit(1719332599.915:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:19.931956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 25 16:23:19.931966 kernel: audit: type=1130 audit(1719332599.922:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:19.931975 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 25 16:23:19.931989 systemd-journald[195]: Journal started
Jun 25 16:23:19.932042 systemd-journald[195]: Runtime Journal (/run/log/journal/7860f2aa453044208b93f14bf8ac2e33) is 6.0M, max 48.3M, 42.3M free.
Jun 25 16:23:19.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:19.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:19.902635 systemd-modules-load[196]: Inserted module 'overlay'
Jun 25 16:23:19.935374 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 25 16:23:19.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.936828 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:23:19.938496 dracut-cmdline[210]: dracut-dracut-053 Jun 25 16:23:19.941861 kernel: audit: type=1130 audit(1719332599.934:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.941886 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:23:19.941928 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:23:19.943026 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:23:19.960168 kernel: Bridge firewalling registered Jun 25 16:23:19.958833 systemd-modules-load[196]: Inserted module 'br_netfilter' Jun 25 16:23:19.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.962716 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 25 16:23:19.966402 kernel: audit: type=1130 audit(1719332599.960:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.966434 kernel: audit: type=1334 audit(1719332599.961:6): prog-id=6 op=LOAD Jun 25 16:23:19.961000 audit: BPF prog-id=6 op=LOAD Jun 25 16:23:19.982751 kernel: SCSI subsystem initialized Jun 25 16:23:19.993117 systemd-resolved[243]: Positive Trust Anchors: Jun 25 16:23:19.993140 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:23:19.993170 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:23:20.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.996180 systemd-resolved[243]: Defaulting to hostname 'linux'. Jun 25 16:23:20.008468 kernel: audit: type=1130 audit(1719332600.002:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:19.997605 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:23:20.003575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 16:23:20.031353 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:23:20.031389 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:23:20.031401 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:23:20.034428 systemd-modules-load[196]: Inserted module 'dm_multipath' Jun 25 16:23:20.036979 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:23:20.035175 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:23:20.040971 kernel: audit: type=1130 audit(1719332600.036:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.048767 kernel: iscsi: registered transport (tcp) Jun 25 16:23:20.055017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:23:20.062518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:23:20.067770 kernel: audit: type=1130 audit(1719332600.063:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.078766 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:23:20.078794 kernel: QLogic iSCSI HBA Driver Jun 25 16:23:20.118617 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 25 16:23:20.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.123782 kernel: audit: type=1130 audit(1719332600.120:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.127955 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:23:20.209792 kernel: raid6: avx2x4 gen() 21549 MB/s Jun 25 16:23:20.226778 kernel: raid6: avx2x2 gen() 25044 MB/s Jun 25 16:23:20.244034 kernel: raid6: avx2x1 gen() 24733 MB/s Jun 25 16:23:20.244094 kernel: raid6: using algorithm avx2x2 gen() 25044 MB/s Jun 25 16:23:20.261937 kernel: raid6: .... xor() 18984 MB/s, rmw enabled Jun 25 16:23:20.261975 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:23:20.265771 kernel: xor: automatically using best checksumming function avx Jun 25 16:23:20.407792 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:23:20.416715 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:23:20.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.419000 audit: BPF prog-id=7 op=LOAD Jun 25 16:23:20.419000 audit: BPF prog-id=8 op=LOAD Jun 25 16:23:20.429103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:23:20.445178 systemd-udevd[398]: Using default interface naming scheme 'v252'. Jun 25 16:23:20.449781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 16:23:20.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.465992 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:23:20.477980 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jun 25 16:23:20.509846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:23:20.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.524110 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:23:20.566012 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:23:20.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:20.600874 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 16:23:20.636827 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:23:20.636847 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:23:20.636870 kernel: AES CTR mode by8 optimization enabled Jun 25 16:23:20.636881 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 16:23:20.637007 kernel: libata version 3.00 loaded. Jun 25 16:23:20.637019 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 16:23:20.637137 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:23:20.637149 kernel: GPT:9289727 != 19775487 Jun 25 16:23:20.637160 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jun 25 16:23:20.637170 kernel: GPT:9289727 != 19775487 Jun 25 16:23:20.637183 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:23:20.637194 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:23:20.637204 kernel: scsi host0: ata_piix Jun 25 16:23:20.637574 kernel: scsi host1: ata_piix Jun 25 16:23:20.637705 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 16:23:20.637719 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 16:23:20.800595 kernel: ata2: found unknown device (class 0) Jun 25 16:23:20.800670 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 16:23:20.803858 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 16:23:20.850779 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Jun 25 16:23:20.862209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 16:23:20.865077 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (457) Jun 25 16:23:20.866125 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:23:20.871695 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:23:20.876108 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 25 16:23:20.877208 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 16:23:20.884792 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 16:23:20.923599 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:23:20.923622 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 16:23:20.895961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jun 25 16:23:20.947635 disk-uuid[534]: Primary Header is updated. Jun 25 16:23:20.947635 disk-uuid[534]: Secondary Entries is updated. Jun 25 16:23:20.947635 disk-uuid[534]: Secondary Header is updated. Jun 25 16:23:20.953766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:23:20.957779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:23:21.965774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:23:21.966159 disk-uuid[535]: The operation has completed successfully. Jun 25 16:23:21.991040 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:23:22.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:21.991121 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:23:22.041895 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:23:22.044831 sh[550]: Success Jun 25 16:23:22.070764 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 16:23:22.091984 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:23:22.110500 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:23:22.113912 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:23:22.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:22.120169 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:23:22.120194 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:22.120203 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:23:22.121197 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:23:22.121939 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:23:22.126149 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:23:22.126291 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:23:22.150874 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:23:22.153518 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:23:22.160242 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:22.160272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:22.160285 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:23:22.166534 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:23:22.168406 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:22.205371 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:23:22.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.277000 audit: BPF prog-id=9 op=LOAD Jun 25 16:23:22.286899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 25 16:23:22.325129 systemd-networkd[729]: lo: Link UP Jun 25 16:23:22.325137 systemd-networkd[729]: lo: Gained carrier Jun 25 16:23:22.325493 systemd-networkd[729]: Enumeration completed Jun 25 16:23:22.325674 systemd-networkd[729]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:23:22.325677 systemd-networkd[729]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:23:22.328342 systemd-networkd[729]: eth0: Link UP Jun 25 16:23:22.328345 systemd-networkd[729]: eth0: Gained carrier Jun 25 16:23:22.328349 systemd-networkd[729]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:23:22.332888 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:23:22.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.339604 systemd[1]: Reached target network.target - Network. Jun 25 16:23:22.350793 systemd-networkd[729]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:23:22.358330 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:23:22.360880 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:23:22.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.363683 systemd[1]: Starting iscsid.service - Open-iSCSI... 
Jun 25 16:23:22.366305 iscsid[734]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:23:22.366305 iscsid[734]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 16:23:22.366305 iscsid[734]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:23:22.366305 iscsid[734]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:23:22.465819 iscsid[734]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:23:22.465819 iscsid[734]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:23:22.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.373247 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:23:22.480878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:23:22.534848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:23:22.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.537527 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:23:22.547593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:23:22.550219 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jun 25 16:23:22.562914 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:23:22.585260 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:23:22.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.587603 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:23:22.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.590839 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:23:22.629965 ignition[749]: Ignition 2.15.0 Jun 25 16:23:22.629977 ignition[749]: Stage: fetch-offline Jun 25 16:23:22.630016 ignition[749]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:22.630023 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:23:22.630124 ignition[749]: parsed url from cmdline: "" Jun 25 16:23:22.630128 ignition[749]: no config URL provided Jun 25 16:23:22.630135 ignition[749]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:23:22.630143 ignition[749]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:23:22.630172 ignition[749]: op(1): [started] loading QEMU firmware config module Jun 25 16:23:22.630178 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 16:23:22.636655 ignition[749]: op(1): [finished] loading QEMU firmware config module Jun 25 16:23:22.677586 ignition[749]: parsing config with SHA512: 7d4594a4ebc677f93df7af4ae535195de875ee34f28ad4e9f034f3029c9c19fd1d656cc230f1a9a226db42704fc25088d460647feef2570f14789de616370562 Jun 25 16:23:22.681358 unknown[749]: fetched base config from "system" Jun 25 16:23:22.681376 unknown[749]: fetched user config from "qemu"
Jun 25 16:23:22.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.682041 ignition[749]: fetch-offline: fetch-offline passed Jun 25 16:23:22.683395 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:23:22.682466 ignition[749]: Ignition finished successfully Jun 25 16:23:22.750594 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 16:23:22.761917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:23:22.775380 ignition[760]: Ignition 2.15.0 Jun 25 16:23:22.775390 ignition[760]: Stage: kargs Jun 25 16:23:22.775482 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:22.775505 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:23:22.778115 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:23:22.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.776433 ignition[760]: kargs: kargs passed Jun 25 16:23:22.776478 ignition[760]: Ignition finished successfully Jun 25 16:23:22.790951 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 25 16:23:22.801804 ignition[769]: Ignition 2.15.0 Jun 25 16:23:22.801813 ignition[769]: Stage: disks Jun 25 16:23:22.801926 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:22.801934 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:23:22.802853 ignition[769]: disks: disks passed Jun 25 16:23:22.802892 ignition[769]: Ignition finished successfully Jun 25 16:23:22.814758 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:23:22.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:22.816032 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:23:22.816087 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:23:22.816294 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:23:22.816494 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:23:22.816675 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:23:22.829887 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:23:22.852984 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:23:23.066251 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:23:23.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:23.074910 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:23:23.160780 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:23:23.161286 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jun 25 16:23:23.162269 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:23:23.179936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:23:23.182299 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:23:23.184180 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:23:23.189378 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785) Jun 25 16:23:23.184220 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:23:23.201601 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:23.201623 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:23.201632 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:23:23.184240 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:23:23.203044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:23:23.208004 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:23:23.211006 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:23:23.244025 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:23:23.250851 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:23:23.254877 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:23:23.258334 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:23:23.322219 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jun 25 16:23:23.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:23.328947 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:23:23.331676 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:23:23.336909 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:23:23.338217 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:23.348233 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:23:23.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:23.400824 ignition[899]: INFO : Ignition 2.15.0 Jun 25 16:23:23.400824 ignition[899]: INFO : Stage: mount Jun 25 16:23:23.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:23.405157 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:23.405157 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:23:23.405157 ignition[899]: INFO : mount: mount passed Jun 25 16:23:23.405157 ignition[899]: INFO : Ignition finished successfully Jun 25 16:23:23.402972 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:23:23.411181 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:23:24.170952 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 25 16:23:24.218768 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (908) Jun 25 16:23:24.221809 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:24.221856 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:24.221867 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:23:24.225552 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:23:24.249542 ignition[926]: INFO : Ignition 2.15.0 Jun 25 16:23:24.249542 ignition[926]: INFO : Stage: files Jun 25 16:23:24.251444 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:24.251444 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:23:24.251444 ignition[926]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:23:24.255133 ignition[926]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:23:24.255133 ignition[926]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:23:24.258061 ignition[926]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:23:24.259412 ignition[926]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:23:24.261072 unknown[926]: wrote ssh authorized keys file for user: core Jun 25 16:23:24.262152 ignition[926]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:23:24.264242 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:23:24.266233 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:23:24.290960 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK 
Jun 25 16:23:24.295860 systemd-networkd[729]: eth0: Gained IPv6LL Jun 25 16:23:24.381212 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:23:24.381212 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 
16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:23:24.385709 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jun 25 16:23:24.739295 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:23:25.222814 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jun 25 16:23:25.222814 ignition[926]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:23:25.226902 ignition[926]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:23:25.226902 ignition[926]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:23:25.226902 ignition[926]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:23:25.233589 ignition[926]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 16:23:25.233589 ignition[926]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:23:25.233589 ignition[926]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:23:25.233589 ignition[926]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jun 25 16:23:25.233589 ignition[926]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 16:23:25.233589 ignition[926]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:23:25.252533 ignition[926]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:23:25.267769 ignition[926]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 16:23:25.267769 ignition[926]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:23:25.267769 ignition[926]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:23:25.267769 ignition[926]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:23:25.267769 ignition[926]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:23:25.267769 ignition[926]: INFO : files: files passed Jun 25 16:23:25.267769 ignition[926]: INFO : Ignition finished successfully Jun 25 16:23:25.278469 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:23:25.283948 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 16:23:25.283972 kernel: audit: type=1130 audit(1719332605.279:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.288915 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jun 25 16:23:25.291965 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:23:25.294498 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:23:25.295583 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:23:25.297785 initrd-setup-root-after-ignition[951]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 16:23:25.323778 kernel: audit: type=1130 audit(1719332605.298:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.323801 kernel: audit: type=1131 audit(1719332605.298:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.319774 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:23:25.329039 kernel: audit: type=1130 audit(1719332605.323:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:25.329053 initrd-setup-root-after-ignition[953]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:23:25.329053 initrd-setup-root-after-ignition[953]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:23:25.323944 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:23:25.333456 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:23:25.356902 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:23:25.368589 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:23:25.368678 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:23:25.378285 kernel: audit: type=1130 audit(1719332605.370:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.378307 kernel: audit: type=1131 audit(1719332605.370:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.370992 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:23:25.378290 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jun 25 16:23:25.379395 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:23:25.380141 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:23:25.389377 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:23:25.394786 kernel: audit: type=1130 audit(1719332605.388:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.390204 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:23:25.397715 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:23:25.398950 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:23:25.429409 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:23:25.431453 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:23:25.437400 kernel: audit: type=1131 audit(1719332605.433:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.431568 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:23:25.433487 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jun 25 16:23:25.437563 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:23:25.439561 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:23:25.455158 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:23:25.457082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:23:25.459207 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:23:25.461332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:23:25.463486 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:23:25.465563 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:23:25.467751 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:23:25.469782 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:23:25.477719 kernel: audit: type=1131 audit(1719332605.473:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.471529 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:23:25.471630 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:23:25.493748 kernel: audit: type=1131 audit(1719332605.479:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:25.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.473853 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:23:25.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.477763 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:23:25.477850 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:23:25.479904 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:23:25.479991 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:23:25.493869 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:23:25.506820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:23:25.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.506940 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:23:25.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.508949 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:23:25.510836 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:23:25.512983 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:23:25.513048 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jun 25 16:23:25.515083 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:23:25.515172 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:23:25.517008 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:23:25.517092 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:23:25.601775 ignition[971]: INFO : Ignition 2.15.0 Jun 25 16:23:25.601775 ignition[971]: INFO : Stage: umount Jun 25 16:23:25.601775 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:25.601775 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:23:25.601775 ignition[971]: INFO : umount: umount passed Jun 25 16:23:25.601775 ignition[971]: INFO : Ignition finished successfully Jun 25 16:23:25.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.531931 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:23:25.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:25.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.557397 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:23:25.559827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:23:25.601785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:23:25.601894 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:23:25.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.603166 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:23:25.603249 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:23:25.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.606294 systemd[1]: iscsiuio.service: Deactivated successfully. 
Jun 25 16:23:25.606372 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:23:25.608092 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:23:25.608157 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:23:25.610864 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:23:25.611350 systemd[1]: Stopped target network.target - Network. Jun 25 16:23:25.612679 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:23:25.612707 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:23:25.612779 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:23:25.643000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:23:25.612811 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:23:25.612968 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:23:25.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.612996 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:23:25.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.613125 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:23:25.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:25.613149 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:23:25.613355 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:23:25.613696 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:23:25.613969 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:23:25.614034 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:23:25.622817 systemd-networkd[729]: eth0: DHCPv6 lease lost Jun 25 16:23:25.623841 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:23:25.623924 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:23:25.701000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:23:25.626288 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:23:25.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.626374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:23:25.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.632183 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:23:25.632207 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:23:25.643832 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:23:25.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.645548 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jun 25 16:23:25.645604 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:23:25.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.663658 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:23:25.663695 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:23:25.665675 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:23:25.665707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:23:25.666842 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:23:25.666875 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:23:25.680369 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:23:25.684248 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:23:25.684333 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:23:25.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.702236 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:23:25.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:25.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.702376 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:23:25.731452 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:23:25.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:25.731526 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:23:25.733816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:23:25.733847 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:23:25.736040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:23:25.736069 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:23:25.738431 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:23:25.871555 iscsid[734]: iscsid shutting down. 
Jun 25 16:23:25.738465 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:23:25.740742 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:23:25.740774 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:23:25.772955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:23:25.772994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:23:25.871757 systemd-journald[195]: Received SIGTERM from PID 1 (n/a). Jun 25 16:23:25.783006 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:23:25.784300 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:23:25.784373 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:23:25.820037 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:23:25.820085 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:23:25.821230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:23:25.821262 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:23:25.824014 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:23:25.824455 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:23:25.824533 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:23:25.824770 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:23:25.824838 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:23:25.825058 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:23:25.825118 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jun 25 16:23:25.825147 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:23:25.826035 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:23:25.833239 systemd[1]: Switching root. Jun 25 16:23:25.872167 systemd-journald[195]: Journal stopped Jun 25 16:23:26.903650 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:23:26.903704 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:23:26.903721 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:23:26.904396 kernel: SELinux: policy capability open_perms=1 Jun 25 16:23:26.904419 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:23:26.904432 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:23:26.904445 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:23:26.904457 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:23:26.904470 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:23:26.904486 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:23:26.904501 systemd[1]: Successfully loaded SELinux policy in 68.438ms. Jun 25 16:23:26.904530 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.983ms. Jun 25 16:23:26.904546 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:23:26.904560 systemd[1]: Detected virtualization kvm. Jun 25 16:23:26.904580 systemd[1]: Detected architecture x86-64. Jun 25 16:23:26.904594 systemd[1]: Detected first boot. Jun 25 16:23:26.904612 systemd[1]: Initializing machine ID from VM UUID. 
Jun 25 16:23:26.904625 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:23:26.904639 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:23:26.904652 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:23:26.904666 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:23:26.904679 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:23:26.904697 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:23:26.904716 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:23:26.904730 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:23:26.904762 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:23:26.904776 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:23:26.904791 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:23:26.904806 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:23:26.904821 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:23:26.904835 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:23:26.904849 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:23:26.904864 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:23:26.904877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:23:26.904895 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:23:26.904909 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jun 25 16:23:26.904928 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:23:26.904947 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:23:26.904962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:23:26.904979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:23:26.904995 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:23:26.905013 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:23:26.905028 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:23:26.905043 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:23:26.905058 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:23:26.905072 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:23:26.905086 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:23:26.905100 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:23:26.905115 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:23:26.905131 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:23:26.905147 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:23:26.905162 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:23:26.905175 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:23:26.905188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:23:26.905201 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:23:26.905215 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jun 25 16:23:26.905228 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:23:26.905241 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:23:26.905255 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:23:26.905272 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:23:26.905285 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:23:26.905299 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:23:26.905313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:23:26.905337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:23:26.905352 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:23:26.905366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:23:26.905386 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:23:26.905402 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:23:26.905415 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:23:26.905429 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:23:26.905443 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:23:26.905456 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:23:26.905469 kernel: loop: module loaded Jun 25 16:23:26.905482 systemd[1]: systemd-journald.service: Consumed 1.201s CPU time. Jun 25 16:23:26.905495 kernel: fuse: init (API version 7.37) Jun 25 16:23:26.905508 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 16:23:26.905524 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:23:26.905539 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:23:26.905552 kernel: ACPI: bus type drm_connector registered Jun 25 16:23:26.905567 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:23:26.905584 systemd-journald[1083]: Journal started Jun 25 16:23:26.905631 systemd-journald[1083]: Runtime Journal (/run/log/journal/7860f2aa453044208b93f14bf8ac2e33) is 6.0M, max 48.3M, 42.3M free. Jun 25 16:23:25.948000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:23:26.319000 audit: BPF prog-id=10 op=LOAD Jun 25 16:23:26.319000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:23:26.319000 audit: BPF prog-id=11 op=LOAD Jun 25 16:23:26.319000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:23:26.730000 audit: BPF prog-id=12 op=LOAD Jun 25 16:23:26.730000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:23:26.730000 audit: BPF prog-id=13 op=LOAD Jun 25 16:23:26.730000 audit: BPF prog-id=14 op=LOAD Jun 25 16:23:26.730000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:23:26.730000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:23:26.730000 audit: BPF prog-id=15 op=LOAD Jun 25 16:23:26.730000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:23:26.731000 audit: BPF prog-id=16 op=LOAD Jun 25 16:23:26.731000 audit: BPF prog-id=17 op=LOAD Jun 25 16:23:26.731000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:23:26.731000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:23:26.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:26.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.739000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:23:26.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:26.878000 audit: BPF prog-id=18 op=LOAD Jun 25 16:23:26.878000 audit: BPF prog-id=19 op=LOAD Jun 25 16:23:26.878000 audit: BPF prog-id=20 op=LOAD Jun 25 16:23:26.878000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:23:26.878000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:23:26.901000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:23:26.901000 audit[1083]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff011e7b10 a2=4000 a3=7fff011e7bac items=0 ppid=1 pid=1083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:26.901000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:23:26.718605 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:23:26.718615 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 16:23:26.732452 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:23:26.732766 systemd[1]: systemd-journald.service: Consumed 1.201s CPU time. Jun 25 16:23:26.910592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:23:26.913578 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:23:26.913626 systemd[1]: Stopped verity-setup.service. Jun 25 16:23:26.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.916758 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:23:26.919900 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 16:23:26.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.920509 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:23:26.921790 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:23:26.923103 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:23:26.924345 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:23:26.925696 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:23:26.927029 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:23:26.928379 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:23:26.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.929897 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:23:26.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.931471 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:23:26.931595 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:23:26.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:26.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.933129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:23:26.933255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:23:26.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.934970 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:23:26.935092 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:23:26.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.936668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:23:26.936802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:23:26.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:26.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.938462 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:23:26.938585 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:23:26.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.940276 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:23:26.940429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:23:26.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.942206 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:23:26.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:26.943903 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:23:26.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.945467 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:23:26.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.947151 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:23:26.957912 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:23:26.960438 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:23:26.961629 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:23:26.963047 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:23:26.965144 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:23:26.966448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:23:26.967479 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:23:26.968790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jun 25 16:23:26.969630 systemd-journald[1083]: Time spent on flushing to /var/log/journal/7860f2aa453044208b93f14bf8ac2e33 is 18.175ms for 1112 entries. Jun 25 16:23:26.969630 systemd-journald[1083]: System Journal (/var/log/journal/7860f2aa453044208b93f14bf8ac2e33) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:23:27.004847 systemd-journald[1083]: Received client request to flush runtime journal. Jun 25 16:23:26.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:26.970129 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:23:26.973085 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:23:26.976723 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:23:26.978399 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:23:27.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:27.007198 udevadm[1104]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 16:23:26.979869 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:23:26.981289 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:23:26.982922 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:23:26.989881 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:23:26.991316 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:23:26.993585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:23:27.000885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:23:27.005710 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:23:27.016361 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:23:27.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.530526 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:23:27.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:27.531000 audit: BPF prog-id=21 op=LOAD Jun 25 16:23:27.531000 audit: BPF prog-id=22 op=LOAD Jun 25 16:23:27.531000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:23:27.531000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:23:27.542984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:23:27.560233 systemd-udevd[1109]: Using default interface naming scheme 'v252'. Jun 25 16:23:27.573918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:23:27.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.575000 audit: BPF prog-id=23 op=LOAD Jun 25 16:23:27.577498 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:23:27.581000 audit: BPF prog-id=24 op=LOAD Jun 25 16:23:27.581000 audit: BPF prog-id=25 op=LOAD Jun 25 16:23:27.581000 audit: BPF prog-id=26 op=LOAD Jun 25 16:23:27.583949 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:23:27.602791 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:23:27.612173 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1111) Jun 25 16:23:27.612294 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1118) Jun 25 16:23:27.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.625958 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:23:27.655211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jun 25 16:23:27.661613 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jun 25 16:23:27.661863 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:23:27.671766 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:23:27.684789 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:23:27.687784 systemd-networkd[1116]: lo: Link UP Jun 25 16:23:27.687790 systemd-networkd[1116]: lo: Gained carrier Jun 25 16:23:27.688138 systemd-networkd[1116]: Enumeration completed Jun 25 16:23:27.688238 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:23:27.688240 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:23:27.688247 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:23:27.690661 systemd-networkd[1116]: eth0: Link UP Jun 25 16:23:27.690667 systemd-networkd[1116]: eth0: Gained carrier Jun 25 16:23:27.690677 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:23:27.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.693953 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 25 16:23:27.700764 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:23:27.704881 systemd-networkd[1116]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:23:27.795107 kernel: SVM: TSC scaling supported Jun 25 16:23:27.795184 kernel: kvm: Nested Virtualization enabled Jun 25 16:23:27.795217 kernel: SVM: kvm: Nested Paging enabled Jun 25 16:23:27.795231 kernel: SVM: Virtual VMLOAD VMSAVE supported Jun 25 16:23:27.796063 kernel: SVM: Virtual GIF supported Jun 25 16:23:27.796087 kernel: SVM: LBR virtualization supported Jun 25 16:23:27.813772 kernel: EDAC MC: Ver: 3.0.0 Jun 25 16:23:27.846397 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:23:27.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.859247 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:23:27.870153 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:23:27.898344 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:23:27.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.900641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:23:27.911112 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:23:27.915914 lvm[1148]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:23:27.952888 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jun 25 16:23:27.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:27.954202 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:23:27.955308 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:23:27.955328 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:23:27.956335 systemd[1]: Reached target machines.target - Containers. Jun 25 16:23:27.966909 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:23:27.968437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:23:27.968497 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:23:27.969819 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:23:27.972354 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:23:27.975429 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:23:27.978227 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:23:27.978642 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1150 (bootctl) Jun 25 16:23:27.980341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... 
Jun 25 16:23:27.988828 kernel: loop0: detected capacity change from 0 to 210664 Jun 25 16:23:27.989518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:23:27.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:28.019868 systemd-fsck[1157]: fsck.fat 4.2 (2021-01-31) Jun 25 16:23:28.019868 systemd-fsck[1157]: /dev/vda1: 809 files, 120401/258078 clusters Jun 25 16:23:28.021480 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:23:28.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:28.037878 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:23:28.061780 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:23:28.264327 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:23:28.265047 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:23:28.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:28.267767 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:23:28.267795 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jun 25 16:23:28.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:28.292756 kernel: loop1: detected capacity change from 0 to 80584
Jun 25 16:23:28.320767 kernel: loop2: detected capacity change from 0 to 139360
Jun 25 16:23:28.346765 kernel: loop3: detected capacity change from 0 to 210664
Jun 25 16:23:28.355767 kernel: loop4: detected capacity change from 0 to 80584
Jun 25 16:23:28.360180 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 25 16:23:28.360766 kernel: loop5: detected capacity change from 0 to 139360
Jun 25 16:23:28.364170 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 25 16:23:28.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:28.367926 (sd-sysext)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jun 25 16:23:28.368364 (sd-sysext)[1166]: Merged extensions into '/usr'.
Jun 25 16:23:28.369800 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 25 16:23:28.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:28.379920 systemd[1]: Starting ensure-sysext.service...
Jun 25 16:23:28.382087 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jun 25 16:23:28.392974 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jun 25 16:23:28.394048 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 25 16:23:28.394247 systemd[1]: Reloading.
Jun 25 16:23:28.394427 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 25 16:23:28.395231 systemd-tmpfiles[1168]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 25 16:23:28.520084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 25 16:23:28.585000 audit: BPF prog-id=27 op=LOAD
Jun 25 16:23:28.585000 audit: BPF prog-id=24 op=UNLOAD
Jun 25 16:23:28.585000 audit: BPF prog-id=28 op=LOAD
Jun 25 16:23:28.585000 audit: BPF prog-id=29 op=LOAD
Jun 25 16:23:28.585000 audit: BPF prog-id=25 op=UNLOAD
Jun 25 16:23:28.585000 audit: BPF prog-id=26 op=UNLOAD
Jun 25 16:23:28.588000 audit: BPF prog-id=30 op=LOAD
Jun 25 16:23:28.588000 audit: BPF prog-id=23 op=UNLOAD
Jun 25 16:23:28.588000 audit: BPF prog-id=31 op=LOAD
Jun 25 16:23:28.588000 audit: BPF prog-id=32 op=LOAD
Jun 25 16:23:28.588000 audit: BPF prog-id=21 op=UNLOAD
Jun 25 16:23:28.588000 audit: BPF prog-id=22 op=UNLOAD
Jun 25 16:23:28.589000 audit: BPF prog-id=33 op=LOAD
Jun 25 16:23:28.589000 audit: BPF prog-id=18 op=UNLOAD
Jun 25 16:23:28.589000 audit: BPF prog-id=34 op=LOAD
Jun 25 16:23:28.589000 audit: BPF prog-id=35 op=LOAD
Jun 25 16:23:28.589000 audit: BPF prog-id=19 op=UNLOAD
Jun 25 16:23:28.589000 audit: BPF prog-id=20 op=UNLOAD
Jun 25 16:23:28.591829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jun 25 16:23:28.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:28.597047 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jun 25 16:23:28.599599 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 25 16:23:28.602589 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 25 16:23:28.604000 audit: BPF prog-id=36 op=LOAD
Jun 25 16:23:28.605940 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 25 16:23:28.607000 audit: BPF prog-id=37 op=LOAD
Jun 25 16:23:28.610082 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 25 16:23:28.612888 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 25 16:23:28.618839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:23:28.619088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 16:23:28.620939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 16:23:28.622000 audit[1238]: SYSTEM_BOOT pid=1238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jun 25 16:23:28.624070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 16:23:28.627480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 16:23:28.629527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 16:23:28.629680 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jun 25 16:23:28.629835 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:23:28.629000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jun 25 16:23:28.629000 audit[1248]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc018bb080 a2=420 a3=0 items=0 ppid=1225 pid=1248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:23:28.629000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jun 25 16:23:28.630430 augenrules[1248]: No rules
Jun 25 16:23:28.631353 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jun 25 16:23:28.633255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 16:23:28.633400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 16:23:28.635665 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 25 16:23:28.637513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 16:23:28.637661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 16:23:28.639463 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 16:23:28.639611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 16:23:28.646216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:23:28.646963 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 16:23:28.668180 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 16:23:28.670851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 16:23:29.684387 systemd-timesyncd[1235]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jun 25 16:23:29.684421 systemd-timesyncd[1235]: Initial clock synchronization to Tue 2024-06-25 16:23:29.684316 UTC.
Jun 25 16:23:29.685195 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 16:23:29.685670 systemd-resolved[1234]: Positive Trust Anchors:
Jun 25 16:23:29.685692 systemd-resolved[1234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 25 16:23:29.685721 systemd-resolved[1234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jun 25 16:23:29.686513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 16:23:29.686694 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jun 25 16:23:29.688304 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 25 16:23:29.689336 systemd-resolved[1234]: Defaulting to hostname 'linux'.
Jun 25 16:23:29.689490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:23:29.690810 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 25 16:23:29.692758 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 25 16:23:29.694380 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 25 16:23:29.695963 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 16:23:29.696071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 16:23:29.697642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 16:23:29.697750 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 16:23:29.699302 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 16:23:29.699413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 16:23:29.701058 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 25 16:23:29.704787 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 25 16:23:29.707063 systemd[1]: Reached target network.target - Network.
Jun 25 16:23:29.708048 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 25 16:23:29.709260 systemd[1]: Reached target time-set.target - System Time Set.
Jun 25 16:23:29.710324 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:23:29.710488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 25 16:23:29.721135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 25 16:23:29.723565 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 25 16:23:29.725756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 25 16:23:29.728070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 25 16:23:29.729347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 25 16:23:29.729432 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jun 25 16:23:29.729511 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 25 16:23:29.729545 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 25 16:23:29.730181 systemd[1]: Finished ensure-sysext.service.
Jun 25 16:23:29.731290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 25 16:23:29.731426 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 25 16:23:29.732987 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 25 16:23:29.733097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 25 16:23:29.734460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 25 16:23:29.734589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 25 16:23:29.735927 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 25 16:23:29.736032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 25 16:23:29.737966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 25 16:23:29.737994 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 25 16:23:29.739171 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 25 16:23:29.740320 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 25 16:23:29.741858 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 25 16:23:29.743348 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 25 16:23:29.744673 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 25 16:23:29.745901 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 25 16:23:29.745940 systemd[1]: Reached target paths.target - Path Units.
Jun 25 16:23:29.746897 systemd[1]: Reached target timers.target - Timer Units.
Jun 25 16:23:29.748334 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 25 16:23:29.750763 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 25 16:23:29.765099 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 25 16:23:29.766533 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jun 25 16:23:29.766600 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 25 16:23:29.767058 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 25 16:23:29.768296 systemd[1]: Reached target sockets.target - Socket Units.
Jun 25 16:23:29.769355 systemd[1]: Reached target basic.target - Basic System.
Jun 25 16:23:29.770414 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 25 16:23:29.770437 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 25 16:23:29.771459 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 25 16:23:29.773573 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 25 16:23:29.775580 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 25 16:23:29.777758 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 25 16:23:29.779055 jq[1266]: false
Jun 25 16:23:29.779087 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 25 16:23:29.780205 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 25 16:23:29.782287 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 25 16:23:29.784675 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 25 16:23:29.786915 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 25 16:23:29.790005 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 25 16:23:29.791134 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jun 25 16:23:29.791182 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 25 16:23:29.791584 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 25 16:23:29.792362 systemd[1]: Starting update-engine.service - Update Engine...
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found loop3
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found loop4
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found loop5
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found sr0
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found vda
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found vda1
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found vda2
Jun 25 16:23:29.794447 extend-filesystems[1267]: Found vda3
Jun 25 16:23:29.818717 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1122)
Jun 25 16:23:29.794117 dbus-daemon[1265]: [system] SELinux support is enabled
Jun 25 16:23:29.818962 update_engine[1280]: I0625 16:23:29.813925 1280 main.cc:92] Flatcar Update Engine starting
Jun 25 16:23:29.818962 update_engine[1280]: I0625 16:23:29.815183 1280 update_check_scheduler.cc:74] Next update check in 7m9s
Jun 25 16:23:29.819127 extend-filesystems[1267]: Found usr
Jun 25 16:23:29.819127 extend-filesystems[1267]: Found vda4
Jun 25 16:23:29.819127 extend-filesystems[1267]: Found vda6
Jun 25 16:23:29.819127 extend-filesystems[1267]: Found vda7
Jun 25 16:23:29.819127 extend-filesystems[1267]: Found vda9
Jun 25 16:23:29.819127 extend-filesystems[1267]: Checking size of /dev/vda9
Jun 25 16:23:29.819127 extend-filesystems[1267]: Resized partition /dev/vda9
Jun 25 16:23:29.835616 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jun 25 16:23:29.797514 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 25 16:23:29.835758 jq[1283]: true
Jun 25 16:23:29.836001 extend-filesystems[1289]: resize2fs 1.47.0 (5-Feb-2023)
Jun 25 16:23:29.799526 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 25 16:23:29.805391 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 25 16:23:29.806392 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 25 16:23:29.837651 jq[1292]: true
Jun 25 16:23:29.806708 systemd[1]: motdgen.service: Deactivated successfully.
Jun 25 16:23:29.806932 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 25 16:23:29.824201 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 25 16:23:29.824345 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 25 16:23:29.846343 systemd-logind[1278]: Watching system buttons on /dev/input/event1 (Power Button)
Jun 25 16:23:29.846360 systemd-logind[1278]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 25 16:23:29.846857 tar[1290]: linux-amd64/helm
Jun 25 16:23:29.846589 systemd[1]: Started update-engine.service - Update Engine.
Jun 25 16:23:29.847096 systemd-logind[1278]: New seat seat0.
Jun 25 16:23:29.849062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 25 16:23:29.849092 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 25 16:23:29.851005 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 25 16:23:29.851028 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 25 16:23:29.854165 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 25 16:23:29.875988 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 25 16:23:29.883853 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jun 25 16:23:29.906030 locksmithd[1298]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 25 16:23:29.908054 extend-filesystems[1289]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 25 16:23:29.908054 extend-filesystems[1289]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 25 16:23:29.908054 extend-filesystems[1289]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jun 25 16:23:29.915986 extend-filesystems[1267]: Resized filesystem in /dev/vda9
Jun 25 16:23:29.910310 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 25 16:23:29.910534 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 25 16:23:29.918558 bash[1310]: Updated "/home/core/.ssh/authorized_keys"
Jun 25 16:23:29.918883 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 25 16:23:29.920994 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 25 16:23:30.026221 containerd[1293]: time="2024-06-25T16:23:30.026131462Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13
Jun 25 16:23:30.048470 containerd[1293]: time="2024-06-25T16:23:30.048417069Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 25 16:23:30.048470 containerd[1293]: time="2024-06-25T16:23:30.048470008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:23:30.049770 containerd[1293]: time="2024-06-25T16:23:30.049741572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:23:30.049800 containerd[1293]: time="2024-06-25T16:23:30.049770837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050023 containerd[1293]: time="2024-06-25T16:23:30.050004034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050049 containerd[1293]: time="2024-06-25T16:23:30.050023040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 25 16:23:30.050098 containerd[1293]: time="2024-06-25T16:23:30.050084245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050151 containerd[1293]: time="2024-06-25T16:23:30.050126684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050151 containerd[1293]: time="2024-06-25T16:23:30.050136923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050206 containerd[1293]: time="2024-06-25T16:23:30.050191706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050383 containerd[1293]: time="2024-06-25T16:23:30.050352257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050406 containerd[1293]: time="2024-06-25T16:23:30.050389066Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 25 16:23:30.050406 containerd[1293]: time="2024-06-25T16:23:30.050397792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050510 containerd[1293]: time="2024-06-25T16:23:30.050493542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 25 16:23:30.050537 containerd[1293]: time="2024-06-25T16:23:30.050509752Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 25 16:23:30.050566 containerd[1293]: time="2024-06-25T16:23:30.050552613Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 25 16:23:30.050592 containerd[1293]: time="2024-06-25T16:23:30.050565567Z" level=info msg="metadata content store policy set" policy=shared
Jun 25 16:23:30.071262 containerd[1293]: time="2024-06-25T16:23:30.071220828Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 25 16:23:30.071262 containerd[1293]: time="2024-06-25T16:23:30.071264841Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 25 16:23:30.071391 containerd[1293]: time="2024-06-25T16:23:30.071276813Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 25 16:23:30.071391 containerd[1293]: time="2024-06-25T16:23:30.071312911Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 25 16:23:30.071391 containerd[1293]: time="2024-06-25T16:23:30.071325965Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 25 16:23:30.071391 containerd[1293]: time="2024-06-25T16:23:30.071336565Z" level=info msg="NRI interface is disabled by configuration."
Jun 25 16:23:30.071391 containerd[1293]: time="2024-06-25T16:23:30.071347546Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 25 16:23:30.071531 containerd[1293]: time="2024-06-25T16:23:30.071513266Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 25 16:23:30.071557 containerd[1293]: time="2024-06-25T16:23:30.071535839Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 25 16:23:30.071557 containerd[1293]: time="2024-06-25T16:23:30.071546899Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 25 16:23:30.071594 containerd[1293]: time="2024-06-25T16:23:30.071559794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 25 16:23:30.071594 containerd[1293]: time="2024-06-25T16:23:30.071572087Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.071594 containerd[1293]: time="2024-06-25T16:23:30.071587495Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.071645 containerd[1293]: time="2024-06-25T16:23:30.071598206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.071645 containerd[1293]: time="2024-06-25T16:23:30.071608455Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.071645 containerd[1293]: time="2024-06-25T16:23:30.071619536Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.071645 containerd[1293]: time="2024-06-25T16:23:30.071630767Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.071714 containerd[1293]: time="2024-06-25T16:23:30.071644693Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.071714 containerd[1293]: time="2024-06-25T16:23:30.071654481Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 25 16:23:30.071750 containerd[1293]: time="2024-06-25T16:23:30.071731475Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 25 16:23:30.072238 containerd[1293]: time="2024-06-25T16:23:30.072219350Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 25 16:23:30.072269 containerd[1293]: time="2024-06-25T16:23:30.072250749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072269 containerd[1293]: time="2024-06-25T16:23:30.072262531Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 25 16:23:30.072314 containerd[1293]: time="2024-06-25T16:23:30.072281326Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 25 16:23:30.072349 containerd[1293]: time="2024-06-25T16:23:30.072333183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072380 containerd[1293]: time="2024-06-25T16:23:30.072352349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072380 containerd[1293]: time="2024-06-25T16:23:30.072374431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072419 containerd[1293]: time="2024-06-25T16:23:30.072395140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072419 containerd[1293]: time="2024-06-25T16:23:30.072410739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072454 containerd[1293]: time="2024-06-25T16:23:30.072425797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072454 containerd[1293]: time="2024-06-25T16:23:30.072443140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072487 containerd[1293]: time="2024-06-25T16:23:30.072452567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072487 containerd[1293]: time="2024-06-25T16:23:30.072463267Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 25 16:23:30.072582 containerd[1293]: time="2024-06-25T16:23:30.072566340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072604 containerd[1293]: time="2024-06-25T16:23:30.072586548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072604 containerd[1293]: time="2024-06-25T16:23:30.072596537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072638 containerd[1293]: time="2024-06-25T16:23:30.072606716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072638 containerd[1293]: time="2024-06-25T16:23:30.072618849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072638 containerd[1293]: time="2024-06-25T16:23:30.072630160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072689 containerd[1293]: time="2024-06-25T16:23:30.072641702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072689 containerd[1293]: time="2024-06-25T16:23:30.072650138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jun 25 16:23:30.072944 containerd[1293]: time="2024-06-25T16:23:30.072896359Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:23:30.073088 containerd[1293]: time="2024-06-25T16:23:30.072950120Z" level=info msg="Connect containerd service" Jun 25 16:23:30.073088 containerd[1293]: time="2024-06-25T16:23:30.072970558Z" level=info msg="using legacy CRI server" Jun 25 16:23:30.073088 containerd[1293]: time="2024-06-25T16:23:30.072975688Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:23:30.073088 containerd[1293]: time="2024-06-25T16:23:30.072999703Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:23:30.073502 containerd[1293]: time="2024-06-25T16:23:30.073480775Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:23:30.074020 containerd[1293]: time="2024-06-25T16:23:30.074000659Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:23:30.074049 containerd[1293]: time="2024-06-25T16:23:30.074028141Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:23:30.074049 containerd[1293]: time="2024-06-25T16:23:30.074038059Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:23:30.074092 containerd[1293]: time="2024-06-25T16:23:30.074046605Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:23:30.074143 containerd[1293]: time="2024-06-25T16:23:30.074106187Z" level=info msg="Start subscribing containerd event" Jun 25 16:23:30.074183 containerd[1293]: time="2024-06-25T16:23:30.074170247Z" level=info msg="Start recovering state" Jun 25 16:23:30.074249 containerd[1293]: time="2024-06-25T16:23:30.074234758Z" level=info msg="Start event monitor" Jun 25 16:23:30.074271 containerd[1293]: time="2024-06-25T16:23:30.074251950Z" level=info msg="Start snapshots syncer" Jun 25 16:23:30.074271 containerd[1293]: time="2024-06-25T16:23:30.074262039Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:23:30.074304 containerd[1293]: time="2024-06-25T16:23:30.074271046Z" level=info msg="Start streaming server" Jun 25 16:23:30.074413 containerd[1293]: time="2024-06-25T16:23:30.074389338Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:23:30.074464 containerd[1293]: time="2024-06-25T16:23:30.074445864Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:23:30.074521 containerd[1293]: time="2024-06-25T16:23:30.074505406Z" level=info msg="containerd successfully booted in 0.053879s" Jun 25 16:23:30.074566 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:23:30.283643 tar[1290]: linux-amd64/LICENSE Jun 25 16:23:30.283643 tar[1290]: linux-amd64/README.md Jun 25 16:23:30.296642 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jun 25 16:23:30.298924 systemd-networkd[1116]: eth0: Gained IPv6LL Jun 25 16:23:30.300581 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:23:30.302053 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:23:30.304861 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 16:23:30.307466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:30.309477 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:23:30.316674 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 16:23:30.316822 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 16:23:30.318302 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:23:30.321630 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:23:30.335442 sshd_keygen[1284]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:23:30.363267 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:23:30.381478 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:23:30.390087 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:23:30.390304 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:23:30.394101 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:23:30.404781 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:23:30.407873 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:23:30.410732 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:23:30.412202 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:23:30.963477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:23:30.965221 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:23:30.967876 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:23:30.973718 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:23:30.973914 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:23:30.975543 systemd[1]: Startup finished in 751ms (kernel) + 6.207s (initrd) + 4.082s (userspace) = 11.042s. Jun 25 16:23:31.397730 kubelet[1352]: E0625 16:23:31.397608 1352 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:23:31.399664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:23:31.399779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:23:34.814425 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:23:34.815622 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:35388.service - OpenSSH per-connection server daemon (10.0.0.1:35388). Jun 25 16:23:34.849641 sshd[1362]: Accepted publickey for core from 10.0.0.1 port 35388 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:23:34.851001 sshd[1362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:34.857767 systemd-logind[1278]: New session 1 of user core. Jun 25 16:23:34.858634 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:23:34.869094 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:23:34.877879 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jun 25 16:23:34.879502 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:23:34.882643 (systemd)[1365]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:34.962201 systemd[1365]: Queued start job for default target default.target. Jun 25 16:23:34.971144 systemd[1365]: Reached target paths.target - Paths. Jun 25 16:23:34.971168 systemd[1365]: Reached target sockets.target - Sockets. Jun 25 16:23:34.971182 systemd[1365]: Reached target timers.target - Timers. Jun 25 16:23:34.971195 systemd[1365]: Reached target basic.target - Basic System. Jun 25 16:23:34.971251 systemd[1365]: Reached target default.target - Main User Target. Jun 25 16:23:34.971283 systemd[1365]: Startup finished in 83ms. Jun 25 16:23:34.971347 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:23:34.972473 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:23:35.035493 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:35390.service - OpenSSH per-connection server daemon (10.0.0.1:35390). Jun 25 16:23:35.068406 sshd[1374]: Accepted publickey for core from 10.0.0.1 port 35390 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:23:35.070563 sshd[1374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:35.074109 systemd-logind[1278]: New session 2 of user core. Jun 25 16:23:35.092046 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:23:35.147559 sshd[1374]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:35.160312 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:35390.service: Deactivated successfully. Jun 25 16:23:35.160946 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:23:35.161508 systemd-logind[1278]: Session 2 logged out. Waiting for processes to exit. 
Jun 25 16:23:35.163202 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:35402.service - OpenSSH per-connection server daemon (10.0.0.1:35402). Jun 25 16:23:35.163926 systemd-logind[1278]: Removed session 2. Jun 25 16:23:35.199695 sshd[1380]: Accepted publickey for core from 10.0.0.1 port 35402 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:23:35.201281 sshd[1380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:35.205602 systemd-logind[1278]: New session 3 of user core. Jun 25 16:23:35.225117 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:23:35.275377 sshd[1380]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:35.284328 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:35402.service: Deactivated successfully. Jun 25 16:23:35.284957 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:23:35.285418 systemd-logind[1278]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:23:35.286821 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:35414.service - OpenSSH per-connection server daemon (10.0.0.1:35414). Jun 25 16:23:35.287610 systemd-logind[1278]: Removed session 3. Jun 25 16:23:35.320913 sshd[1386]: Accepted publickey for core from 10.0.0.1 port 35414 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:23:35.322407 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:35.326289 systemd-logind[1278]: New session 4 of user core. Jun 25 16:23:35.340056 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:23:35.396088 sshd[1386]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:35.413946 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:35414.service: Deactivated successfully. Jun 25 16:23:35.414649 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:23:35.415258 systemd-logind[1278]: Session 4 logged out. Waiting for processes to exit. 
Jun 25 16:23:35.416789 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:35428.service - OpenSSH per-connection server daemon (10.0.0.1:35428). Jun 25 16:23:35.417676 systemd-logind[1278]: Removed session 4. Jun 25 16:23:35.449988 sshd[1392]: Accepted publickey for core from 10.0.0.1 port 35428 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:23:35.451319 sshd[1392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:35.457683 systemd-logind[1278]: New session 5 of user core. Jun 25 16:23:35.465991 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:23:35.524964 sudo[1395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:23:35.525262 sudo[1395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:35.541454 sudo[1395]: pam_unix(sudo:session): session closed for user root Jun 25 16:23:35.543433 sshd[1392]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:35.553998 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:35428.service: Deactivated successfully. Jun 25 16:23:35.554544 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:23:35.555123 systemd-logind[1278]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:23:35.556497 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:35434.service - OpenSSH per-connection server daemon (10.0.0.1:35434). Jun 25 16:23:35.557167 systemd-logind[1278]: Removed session 5. Jun 25 16:23:35.588498 sshd[1399]: Accepted publickey for core from 10.0.0.1 port 35434 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:23:35.589916 sshd[1399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:35.593154 systemd-logind[1278]: New session 6 of user core. Jun 25 16:23:35.611117 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 25 16:23:35.663297 sudo[1403]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:23:35.663501 sudo[1403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:35.666360 sudo[1403]: pam_unix(sudo:session): session closed for user root Jun 25 16:23:35.670220 sudo[1402]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:23:35.670412 sudo[1402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:35.696182 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:23:35.696000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:23:35.697767 auditctl[1406]: No rules Jun 25 16:23:35.698117 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:23:35.698356 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:23:35.700559 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:23:35.718339 augenrules[1423]: No rules Jun 25 16:23:35.718870 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jun 25 16:23:35.724972 kernel: kauditd_printk_skb: 137 callbacks suppressed Jun 25 16:23:35.725084 kernel: audit: type=1305 audit(1719332615.696:178): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:23:35.725232 sudo[1402]: pam_unix(sudo:session): session closed for user root Jun 25 16:23:35.696000 audit[1406]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb6b33100 a2=420 a3=0 items=0 ppid=1 pid=1406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.728261 kernel: audit: type=1300 audit(1719332615.696:178): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb6b33100 a2=420 a3=0 items=0 ppid=1 pid=1406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.727222 sshd[1399]: pam_unix(sshd:session): session closed for user core Jun 25 16:23:35.730270 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:35434.service: Deactivated successfully. Jun 25 16:23:35.730811 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:23:35.731271 systemd-logind[1278]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:23:35.696000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:23:35.732612 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:35438.service - OpenSSH per-connection server daemon (10.0.0.1:35438). Jun 25 16:23:35.733283 systemd-logind[1278]: Removed session 6. 
Jun 25 16:23:35.733614 kernel: audit: type=1327 audit(1719332615.696:178): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:23:35.733657 kernel: audit: type=1131 audit(1719332615.697:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.740685 kernel: audit: type=1130 audit(1719332615.718:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.724000 audit[1402]: USER_END pid=1402 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.724000 audit[1402]: CRED_DISP pid=1402 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.748717 kernel: audit: type=1106 audit(1719332615.724:181): pid=1402 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:35.748818 kernel: audit: type=1104 audit(1719332615.724:182): pid=1402 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.748854 kernel: audit: type=1106 audit(1719332615.727:183): pid=1399 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.727000 audit[1399]: USER_END pid=1399 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.727000 audit[1399]: CRED_DISP pid=1399 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.757236 kernel: audit: type=1104 audit(1719332615.727:184): pid=1399 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.757287 kernel: audit: type=1131 audit(1719332615.729:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.104:22-10.0.0.1:35434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:35.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.104:22-10.0.0.1:35434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.104:22-10.0.0.1:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.764000 audit[1429]: USER_ACCT pid=1429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.764973 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 35438 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:23:35.764000 audit[1429]: CRED_ACQ pid=1429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.764000 audit[1429]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd621a6410 a2=3 a3=7fd3d5aec480 items=0 ppid=1 pid=1429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:35.764000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:23:35.766008 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:23:35.769272 systemd-logind[1278]: New session 7 of user core. Jun 25 16:23:35.780130 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 25 16:23:35.783000 audit[1429]: USER_START pid=1429 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.785000 audit[1431]: CRED_ACQ pid=1431 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:23:35.831000 audit[1432]: USER_ACCT pid=1432 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.831000 audit[1432]: CRED_REFR pid=1432 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.832596 sudo[1432]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:23:35.832821 sudo[1432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:23:35.834000 audit[1432]: USER_START pid=1432 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:23:35.930160 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:23:36.155759 dockerd[1443]: time="2024-06-25T16:23:36.155699753Z" level=info msg="Starting up" Jun 25 16:23:38.204115 dockerd[1443]: time="2024-06-25T16:23:38.204053264Z" level=info msg="Loading containers: start." 
Jun 25 16:23:38.252000 audit[1478]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.252000 audit[1478]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff6ec5bac0 a2=0 a3=7f4b58927e90 items=0 ppid=1443 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.252000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:23:38.254000 audit[1480]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.254000 audit[1480]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd94e068e0 a2=0 a3=7f4af6607e90 items=0 ppid=1443 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.254000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:23:38.256000 audit[1482]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.256000 audit[1482]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff6d050350 a2=0 a3=7f2602690e90 items=0 ppid=1443 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.256000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:23:38.257000 audit[1484]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.257000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe844bf2b0 a2=0 a3=7f9db6ecce90 items=0 ppid=1443 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.257000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:23:38.260000 audit[1486]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.260000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd08b9ca00 a2=0 a3=7f2210e6de90 items=0 ppid=1443 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:23:38.261000 audit[1488]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.261000 audit[1488]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb7f44b30 a2=0 a3=7f073a87de90 items=0 ppid=1443 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.261000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:23:38.706000 audit[1490]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.706000 audit[1490]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd7fc4a1d0 a2=0 a3=7f928171ae90 items=0 ppid=1443 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.706000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:23:38.708000 audit[1492]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.708000 audit[1492]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd7b1115e0 a2=0 a3=7fb21ae74e90 items=0 ppid=1443 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.708000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:23:38.709000 audit[1494]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:38.709000 audit[1494]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd2d5834a0 a2=0 a3=7fb2a9effe90 items=0 ppid=1443 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:38.709000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:39.337000 audit[1498]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.337000 audit[1498]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffea5cc6f90 a2=0 a3=7f9929bf7e90 items=0 ppid=1443 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.337000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:39.338000 audit[1499]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.338000 audit[1499]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe8c8b4440 a2=0 a3=7f3ed2678e90 items=0 ppid=1443 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.338000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:39.346839 kernel: Initializing XFRM netlink socket Jun 25 16:23:39.373000 audit[1509]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.373000 audit[1509]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd6864b8e0 a2=0 a3=7f8ae6594e90 items=0 ppid=1443 pid=1509 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.373000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:23:39.390000 audit[1512]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.390000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff1f3e4cb0 a2=0 a3=7ffacf8bae90 items=0 ppid=1443 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.390000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:23:39.393000 audit[1516]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.393000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcd70268b0 a2=0 a3=7fa14846ce90 items=0 ppid=1443 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.393000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:23:39.395000 audit[1518]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.395000 
audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeb43e67a0 a2=0 a3=7efe05815e90 items=0 ppid=1443 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.395000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:23:39.397000 audit[1520]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.397000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff10e5a310 a2=0 a3=7f29a8ca4e90 items=0 ppid=1443 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.397000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:23:39.399000 audit[1522]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.399000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd69f3ea50 a2=0 a3=7f97fb3d5e90 items=0 ppid=1443 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.399000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:23:39.401000 audit[1524]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.401000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffe8e4cfe0 a2=0 a3=7f9112da1e90 items=0 ppid=1443 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.401000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:23:39.405000 audit[1527]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.405000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff061208c0 a2=0 a3=7f01f6e42e90 items=0 ppid=1443 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.405000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:23:39.407000 audit[1529]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.407000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe99f22de0 a2=0 a3=7f90ef474e90 items=0 ppid=1443 pid=1529 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:23:39.409000 audit[1531]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.409000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe969d0b40 a2=0 a3=7f2eccd63e90 items=0 ppid=1443 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.409000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:23:39.411000 audit[1533]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.411000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffde8d97040 a2=0 a3=7f635a443e90 items=0 ppid=1443 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.411000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:23:39.412375 systemd-networkd[1116]: docker0: Link UP Jun 25 16:23:39.729000 
audit[1537]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.729000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffd6dd10c0 a2=0 a3=7fa447926e90 items=0 ppid=1443 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.729000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:39.730000 audit[1538]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:23:39.730000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe92a50ad0 a2=0 a3=7fccf4ae5e90 items=0 ppid=1443 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:23:39.730000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:23:39.731229 dockerd[1443]: time="2024-06-25T16:23:39.731194078Z" level=info msg="Loading containers: done." Jun 25 16:23:39.776247 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2714541178-merged.mount: Deactivated successfully. 
Jun 25 16:23:40.031779 dockerd[1443]: time="2024-06-25T16:23:40.031654123Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:23:40.032006 dockerd[1443]: time="2024-06-25T16:23:40.031979253Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:23:40.032139 dockerd[1443]: time="2024-06-25T16:23:40.032113084Z" level=info msg="Daemon has completed initialization" Jun 25 16:23:40.605301 dockerd[1443]: time="2024-06-25T16:23:40.605201009Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:23:40.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:40.605912 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:23:41.650598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:23:41.650843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:41.660859 kernel: kauditd_printk_skb: 84 callbacks suppressed Jun 25 16:23:41.660935 kernel: audit: type=1130 audit(1719332621.650:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:41.660959 kernel: audit: type=1131 audit(1719332621.650:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:41.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:41.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:41.900230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:41.932887 containerd[1293]: time="2024-06-25T16:23:41.932839016Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 16:23:42.098593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:42.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:42.101865 kernel: audit: type=1130 audit(1719332622.097:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:42.286265 kubelet[1589]: E0625 16:23:42.286120 1589 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:23:42.289192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:23:42.289312 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 16:23:42.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:23:42.292864 kernel: audit: type=1131 audit(1719332622.288:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:23:43.272372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635708917.mount: Deactivated successfully. Jun 25 16:23:45.090142 containerd[1293]: time="2024-06-25T16:23:45.090065047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:45.091025 containerd[1293]: time="2024-06-25T16:23:45.090977628Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jun 25 16:23:45.095838 containerd[1293]: time="2024-06-25T16:23:45.095772667Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:45.098191 containerd[1293]: time="2024-06-25T16:23:45.098129305Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:45.100331 containerd[1293]: time="2024-06-25T16:23:45.100292951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:45.101394 containerd[1293]: time="2024-06-25T16:23:45.101347167Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id 
\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 3.168456544s" Jun 25 16:23:45.101394 containerd[1293]: time="2024-06-25T16:23:45.101387923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jun 25 16:23:45.124635 containerd[1293]: time="2024-06-25T16:23:45.124593045Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 16:23:47.581241 containerd[1293]: time="2024-06-25T16:23:47.581164308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:47.582370 containerd[1293]: time="2024-06-25T16:23:47.582273628Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jun 25 16:23:47.584030 containerd[1293]: time="2024-06-25T16:23:47.583993682Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:47.586341 containerd[1293]: time="2024-06-25T16:23:47.586300978Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:47.589056 containerd[1293]: time="2024-06-25T16:23:47.588967326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:47.590230 containerd[1293]: time="2024-06-25T16:23:47.590166905Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.465529177s" Jun 25 16:23:47.590313 containerd[1293]: time="2024-06-25T16:23:47.590228180Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jun 25 16:23:47.619036 containerd[1293]: time="2024-06-25T16:23:47.618975952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 16:23:49.415155 containerd[1293]: time="2024-06-25T16:23:49.415076899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:49.416291 containerd[1293]: time="2024-06-25T16:23:49.416234470Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jun 25 16:23:49.418215 containerd[1293]: time="2024-06-25T16:23:49.417874083Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:49.420475 containerd[1293]: time="2024-06-25T16:23:49.420425496Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:49.423376 containerd[1293]: time="2024-06-25T16:23:49.423288784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:49.424407 
containerd[1293]: time="2024-06-25T16:23:49.424363238Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.805342222s" Jun 25 16:23:49.424465 containerd[1293]: time="2024-06-25T16:23:49.424408353Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jun 25 16:23:49.448961 containerd[1293]: time="2024-06-25T16:23:49.448904454Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 16:23:51.780656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682891864.mount: Deactivated successfully. Jun 25 16:23:52.540343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:23:52.540574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:52.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:52.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:52.545849 kernel: audit: type=1130 audit(1719332632.539:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:52.545901 kernel: audit: type=1131 audit(1719332632.539:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:52.551093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:23:52.632989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:23:52.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:52.636858 kernel: audit: type=1130 audit(1719332632.632:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:52.855236 kubelet[1691]: E0625 16:23:52.854911 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:23:52.857005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:23:52.857119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:23:52.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:23:52.860858 kernel: audit: type=1131 audit(1719332632.856:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jun 25 16:23:53.607393 containerd[1293]: time="2024-06-25T16:23:53.607334405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:53.611039 containerd[1293]: time="2024-06-25T16:23:53.611000598Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jun 25 16:23:53.614911 containerd[1293]: time="2024-06-25T16:23:53.614882315Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:53.618619 containerd[1293]: time="2024-06-25T16:23:53.618590997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:53.621450 containerd[1293]: time="2024-06-25T16:23:53.621390806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:53.621996 containerd[1293]: time="2024-06-25T16:23:53.621954132Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 4.172995286s" Jun 25 16:23:53.621996 containerd[1293]: time="2024-06-25T16:23:53.621991382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jun 25 16:23:53.644313 containerd[1293]: time="2024-06-25T16:23:53.644273432Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 16:23:54.350930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749823424.mount: Deactivated successfully. Jun 25 16:23:55.980962 containerd[1293]: time="2024-06-25T16:23:55.980906305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:55.982130 containerd[1293]: time="2024-06-25T16:23:55.982083632Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jun 25 16:23:55.983710 containerd[1293]: time="2024-06-25T16:23:55.983657141Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:55.985759 containerd[1293]: time="2024-06-25T16:23:55.985733494Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:55.987658 containerd[1293]: time="2024-06-25T16:23:55.987613568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:55.988630 containerd[1293]: time="2024-06-25T16:23:55.988593596Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.344276652s" Jun 25 16:23:55.988702 containerd[1293]: time="2024-06-25T16:23:55.988632508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jun 25 16:23:56.008560 containerd[1293]: time="2024-06-25T16:23:56.008515572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:23:56.453786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836621790.mount: Deactivated successfully. Jun 25 16:23:56.464140 containerd[1293]: time="2024-06-25T16:23:56.464067479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.464903 containerd[1293]: time="2024-06-25T16:23:56.464819910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:23:56.466153 containerd[1293]: time="2024-06-25T16:23:56.466110128Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.467910 containerd[1293]: time="2024-06-25T16:23:56.467877201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.469758 containerd[1293]: time="2024-06-25T16:23:56.469652439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:23:56.470354 containerd[1293]: time="2024-06-25T16:23:56.470314660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 461.754224ms" Jun 25 16:23:56.470354 containerd[1293]: time="2024-06-25T16:23:56.470346029Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:23:56.494256 containerd[1293]: time="2024-06-25T16:23:56.494210737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 16:23:57.020365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056126052.mount: Deactivated successfully. Jun 25 16:24:00.123420 containerd[1293]: time="2024-06-25T16:24:00.123338465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:00.124378 containerd[1293]: time="2024-06-25T16:24:00.124296301Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jun 25 16:24:00.129477 containerd[1293]: time="2024-06-25T16:24:00.129420377Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:00.142224 containerd[1293]: time="2024-06-25T16:24:00.142145732Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:00.144767 containerd[1293]: time="2024-06-25T16:24:00.144698909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:00.145944 containerd[1293]: time="2024-06-25T16:24:00.145887066Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.651467717s" Jun 25 16:24:00.146052 containerd[1293]: 
time="2024-06-25T16:24:00.145948511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jun 25 16:24:03.071141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:24:03.071359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:03.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.077362 kernel: audit: type=1130 audit(1719332643.070:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.077441 kernel: audit: type=1131 audit(1719332643.070:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.083207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:03.206044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:03.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:03.209854 kernel: audit: type=1130 audit(1719332643.205:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.253859 kubelet[1888]: E0625 16:24:03.253797 1888 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:24:03.256314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:24:03.256437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:24:03.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:03.267858 kernel: audit: type=1131 audit(1719332643.255:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:03.827441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:03.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:03.830849 kernel: audit: type=1130 audit(1719332643.826:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.830896 kernel: audit: type=1131 audit(1719332643.829:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:03.836136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:03.856421 systemd[1]: Reloading. Jun 25 16:24:04.542037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:24:04.599000 audit: BPF prog-id=41 op=LOAD Jun 25 16:24:04.599000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:24:04.602269 kernel: audit: type=1334 audit(1719332644.599:234): prog-id=41 op=LOAD Jun 25 16:24:04.602329 kernel: audit: type=1334 audit(1719332644.599:235): prog-id=27 op=UNLOAD Jun 25 16:24:04.602348 kernel: audit: type=1334 audit(1719332644.599:236): prog-id=42 op=LOAD Jun 25 16:24:04.599000 audit: BPF prog-id=42 op=LOAD Jun 25 16:24:04.599000 audit: BPF prog-id=43 op=LOAD Jun 25 16:24:04.604844 kernel: audit: type=1334 audit(1719332644.599:237): prog-id=43 op=LOAD Jun 25 16:24:04.599000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:24:04.599000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:24:04.601000 audit: BPF prog-id=44 op=LOAD Jun 25 16:24:04.601000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:24:04.603000 audit: BPF prog-id=45 op=LOAD Jun 25 16:24:04.603000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:24:04.604000 audit: BPF prog-id=46 op=LOAD Jun 25 16:24:04.604000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:24:04.604000 audit: BPF prog-id=47 op=LOAD Jun 25 
16:24:04.604000 audit: BPF prog-id=48 op=LOAD Jun 25 16:24:04.604000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:24:04.604000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:24:04.605000 audit: BPF prog-id=49 op=LOAD Jun 25 16:24:04.605000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:24:04.605000 audit: BPF prog-id=50 op=LOAD Jun 25 16:24:04.605000 audit: BPF prog-id=51 op=LOAD Jun 25 16:24:04.605000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:24:04.605000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:24:04.606000 audit: BPF prog-id=52 op=LOAD Jun 25 16:24:04.606000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:24:04.606000 audit: BPF prog-id=53 op=LOAD Jun 25 16:24:04.606000 audit: BPF prog-id=54 op=LOAD Jun 25 16:24:04.606000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:24:04.606000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:24:04.623102 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:24:04.623177 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:24:04.623425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:04.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:04.624951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:04.722321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:04.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:04.772757 kubelet[1964]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:24:04.772757 kubelet[1964]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:24:04.772757 kubelet[1964]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:24:04.773142 kubelet[1964]: I0625 16:24:04.772809 1964 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:24:05.161967 kubelet[1964]: I0625 16:24:05.161922 1964 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:24:05.161967 kubelet[1964]: I0625 16:24:05.161953 1964 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:24:05.162216 kubelet[1964]: I0625 16:24:05.162194 1964 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:24:05.187610 kubelet[1964]: I0625 16:24:05.187578 1964 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:24:05.188377 kubelet[1964]: E0625 16:24:05.188349 1964 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.245672 kubelet[1964]: I0625 16:24:05.245634 1964 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:24:05.250316 kubelet[1964]: I0625 16:24:05.250255 1964 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:24:05.250547 kubelet[1964]: I0625 16:24:05.250304 1964 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:24:05.251139 kubelet[1964]: I0625 16:24:05.251115 1964 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 
16:24:05.251139 kubelet[1964]: I0625 16:24:05.251133 1964 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:24:05.251273 kubelet[1964]: I0625 16:24:05.251252 1964 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:24:05.252065 kubelet[1964]: I0625 16:24:05.252045 1964 kubelet.go:400] "Attempting to sync node with API server" Jun 25 16:24:05.252065 kubelet[1964]: I0625 16:24:05.252064 1964 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:24:05.252131 kubelet[1964]: I0625 16:24:05.252094 1964 kubelet.go:312] "Adding apiserver pod source" Jun 25 16:24:05.252131 kubelet[1964]: I0625 16:24:05.252115 1964 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:24:05.252585 kubelet[1964]: W0625 16:24:05.252505 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.252585 kubelet[1964]: E0625 16:24:05.252581 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.252768 kubelet[1964]: W0625 16:24:05.252712 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.252768 kubelet[1964]: E0625 16:24:05.252769 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: 
connection refused Jun 25 16:24:05.258112 kubelet[1964]: I0625 16:24:05.258074 1964 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:24:05.261810 kubelet[1964]: I0625 16:24:05.261774 1964 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 16:24:05.261927 kubelet[1964]: W0625 16:24:05.261863 1964 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:24:05.262600 kubelet[1964]: I0625 16:24:05.262553 1964 server.go:1264] "Started kubelet" Jun 25 16:24:05.263125 kubelet[1964]: I0625 16:24:05.262646 1964 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:24:05.263845 kubelet[1964]: I0625 16:24:05.263802 1964 server.go:455] "Adding debug handlers to kubelet server" Jun 25 16:24:05.263922 kubelet[1964]: I0625 16:24:05.263857 1964 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 16:24:05.264305 kubelet[1964]: I0625 16:24:05.264149 1964 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:24:05.265873 kubelet[1964]: I0625 16:24:05.265839 1964 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:24:05.266936 kubelet[1964]: E0625 16:24:05.266228 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:05.266936 kubelet[1964]: I0625 16:24:05.266289 1964 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:24:05.266936 kubelet[1964]: I0625 16:24:05.266396 1964 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 16:24:05.266936 kubelet[1964]: I0625 16:24:05.266449 1964 reconciler.go:26] "Reconciler: start to sync state" Jun 25 16:24:05.267847 kubelet[1964]: W0625 16:24:05.267707 
1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.267847 kubelet[1964]: E0625 16:24:05.267757 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.267973 kubelet[1964]: E0625 16:24:05.267908 1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms" Jun 25 16:24:05.268378 kubelet[1964]: I0625 16:24:05.268350 1964 factory.go:221] Registration of the systemd container factory successfully Jun 25 16:24:05.267000 audit[1976]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.267000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea2523640 a2=0 a3=7f1ec6db0e90 items=0 ppid=1964 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:24:05.277481 kubelet[1964]: I0625 16:24:05.277403 1964 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 16:24:05.278161 
kubelet[1964]: E0625 16:24:05.278140 1964 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:24:05.278000 audit[1977]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.278000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff30c8c5f0 a2=0 a3=7f3054adee90 items=0 ppid=1964 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:24:05.280132 kubelet[1964]: I0625 16:24:05.280080 1964 factory.go:221] Registration of the containerd container factory successfully Jun 25 16:24:05.280443 kubelet[1964]: E0625 16:24:05.280315 1964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17dc4bee739ae936 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 16:24:05.262518582 +0000 UTC m=+0.536105731,LastTimestamp:2024-06-25 16:24:05.262518582 +0000 UTC m=+0.536105731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 16:24:05.280000 audit[1979]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain 
pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.280000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc50d098d0 a2=0 a3=7fd584a60e90 items=0 ppid=1964 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:24:05.285000 audit[1984]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.285000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe027d1360 a2=0 a3=7f3e3c9ade90 items=0 ppid=1964 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:24:05.291000 audit[1989]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.291000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc0b1bfb30 a2=0 a3=7fd1afdc5e90 items=0 ppid=1964 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.291000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:24:05.292665 kubelet[1964]: I0625 16:24:05.292629 1964 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:24:05.292000 audit[1991]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:05.292000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd9847d9f0 a2=0 a3=7f40adf4ce90 items=0 ppid=1964 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:24:05.293968 kubelet[1964]: I0625 16:24:05.293943 1964 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:24:05.294002 kubelet[1964]: I0625 16:24:05.293972 1964 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:24:05.294002 kubelet[1964]: I0625 16:24:05.293991 1964 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:24:05.294242 kubelet[1964]: I0625 16:24:05.294222 1964 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:24:05.294277 kubelet[1964]: I0625 16:24:05.294257 1964 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:24:05.294298 kubelet[1964]: I0625 16:24:05.294279 1964 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 16:24:05.294346 kubelet[1964]: E0625 16:24:05.294317 1964 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:24:05.293000 audit[1992]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.293000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff04ebc9c0 a2=0 a3=7f9b0f866e90 items=0 ppid=1964 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:24:05.295062 kubelet[1964]: W0625 16:24:05.295024 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.295158 kubelet[1964]: E0625 16:24:05.295147 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused Jun 25 16:24:05.294000 audit[1993]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 
25 16:24:05.294000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd022b2a0 a2=0 a3=7fd8fbebfe90 items=0 ppid=1964 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:24:05.294000 audit[1994]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.294000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdda205e0 a2=0 a3=7f8abefbee90 items=0 ppid=1964 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.294000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:24:05.295000 audit[1995]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:05.295000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd175e53c0 a2=0 a3=7ffad1699e90 items=0 ppid=1964 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:24:05.295000 audit[1996]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1996 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:05.295000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebf825980 a2=0 a3=7f066cbace90 items=0 ppid=1964 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:24:05.296000 audit[1997]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:05.296000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff64364300 a2=0 a3=7f4cb9f88e90 items=0 ppid=1964 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:05.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:24:05.368268 kubelet[1964]: I0625 16:24:05.368229 1964 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:24:05.368757 kubelet[1964]: E0625 16:24:05.368711 1964 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jun 25 16:24:05.395091 kubelet[1964]: E0625 16:24:05.395040 1964 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:24:05.469065 kubelet[1964]: E0625 16:24:05.468930 1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms" Jun 25 16:24:05.571103 kubelet[1964]: I0625 16:24:05.571060 1964 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:24:05.571544 kubelet[1964]: E0625 16:24:05.571497 1964 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jun 25 16:24:05.595685 kubelet[1964]: E0625 16:24:05.595621 1964 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:24:05.850183 kubelet[1964]: I0625 16:24:05.850130 1964 policy_none.go:49] "None policy: Start" Jun 25 16:24:05.850910 kubelet[1964]: I0625 16:24:05.850881 1964 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 16:24:05.850910 kubelet[1964]: I0625 16:24:05.850903 1964 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:24:05.869751 kubelet[1964]: E0625 16:24:05.869680 1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms" Jun 25 16:24:05.969659 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jun 25 16:24:05.973198 kubelet[1964]: I0625 16:24:05.973166 1964 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 16:24:05.973605 kubelet[1964]: E0625 16:24:05.973557 1964 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
Jun 25 16:24:05.995711 kubelet[1964]: E0625 16:24:05.995678 1964 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 25 16:24:05.995721 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 25 16:24:05.998956 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 25 16:24:06.008399 kubelet[1964]: I0625 16:24:06.008378 1964 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 16:24:06.008640 kubelet[1964]: I0625 16:24:06.008555 1964 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 25 16:24:06.008702 kubelet[1964]: I0625 16:24:06.008671 1964 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 16:24:06.009797 kubelet[1964]: E0625 16:24:06.009772 1964 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jun 25 16:24:06.090378 kubelet[1964]: W0625 16:24:06.090300 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.090378 kubelet[1964]: E0625 16:24:06.090356 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.345357 kubelet[1964]: W0625 16:24:06.345294 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.345357 kubelet[1964]: E0625 16:24:06.345361 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.421341 kubelet[1964]: W0625 16:24:06.421288 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.421341 kubelet[1964]: E0625 16:24:06.421338 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.514433 kubelet[1964]: W0625 16:24:06.514345 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.514433 kubelet[1964]: E0625 16:24:06.514423 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:06.670687 kubelet[1964]: E0625 16:24:06.670528 1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s"
Jun 25 16:24:06.775702 kubelet[1964]: I0625 16:24:06.775676 1964 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 16:24:06.776101 kubelet[1964]: E0625 16:24:06.776058 1964 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
Jun 25 16:24:06.796344 kubelet[1964]: I0625 16:24:06.796269 1964 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 16:24:06.797409 kubelet[1964]: I0625 16:24:06.797376 1964 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 16:24:06.798249 kubelet[1964]: I0625 16:24:06.798234 1964 topology_manager.go:215] "Topology Admit Handler" podUID="9fdc2ce3a3af715490ad36b8ba15f6a8" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 16:24:06.803117 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice.
Jun 25 16:24:06.817659 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice.
Jun 25 16:24:06.835594 systemd[1]: Created slice kubepods-burstable-pod9fdc2ce3a3af715490ad36b8ba15f6a8.slice - libcontainer container kubepods-burstable-pod9fdc2ce3a3af715490ad36b8ba15f6a8.slice.
Jun 25 16:24:06.874317 kubelet[1964]: I0625 16:24:06.874270 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:06.874317 kubelet[1964]: I0625 16:24:06.874306 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:06.874597 kubelet[1964]: I0625 16:24:06.874322 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 16:24:06.874597 kubelet[1964]: I0625 16:24:06.874341 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:06.874597 kubelet[1964]: I0625 16:24:06.874357 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:06.874597 kubelet[1964]: I0625 16:24:06.874372 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fdc2ce3a3af715490ad36b8ba15f6a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fdc2ce3a3af715490ad36b8ba15f6a8\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:24:06.874597 kubelet[1964]: I0625 16:24:06.874385 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fdc2ce3a3af715490ad36b8ba15f6a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fdc2ce3a3af715490ad36b8ba15f6a8\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:24:06.874713 kubelet[1964]: I0625 16:24:06.874408 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fdc2ce3a3af715490ad36b8ba15f6a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9fdc2ce3a3af715490ad36b8ba15f6a8\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:24:06.874713 kubelet[1964]: I0625 16:24:06.874423 1964 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:07.115030 kubelet[1964]: E0625 16:24:07.114975 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:07.115697 containerd[1293]: time="2024-06-25T16:24:07.115632424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}"
Jun 25 16:24:07.134848 kubelet[1964]: E0625 16:24:07.134762 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:07.135160 containerd[1293]: time="2024-06-25T16:24:07.135109233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}"
Jun 25 16:24:07.137461 kubelet[1964]: E0625 16:24:07.137415 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:07.137845 containerd[1293]: time="2024-06-25T16:24:07.137793162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9fdc2ce3a3af715490ad36b8ba15f6a8,Namespace:kube-system,Attempt:0,}"
Jun 25 16:24:07.231713 kubelet[1964]: E0625 16:24:07.231669 1964 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.026941 kubelet[1964]: W0625 16:24:08.026872 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.026941 kubelet[1964]: E0625 16:24:08.026931 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.069751 kubelet[1964]: W0625 16:24:08.069690 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.069751 kubelet[1964]: E0625 16:24:08.069753 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.271154 kubelet[1964]: E0625 16:24:08.271106 1964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="3.2s"
Jun 25 16:24:08.377839 kubelet[1964]: I0625 16:24:08.377784 1964 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 16:24:08.378143 kubelet[1964]: E0625 16:24:08.378121 1964 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
Jun 25 16:24:08.432844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783585855.mount: Deactivated successfully.
Jun 25 16:24:08.440153 containerd[1293]: time="2024-06-25T16:24:08.440102324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.441212 containerd[1293]: time="2024-06-25T16:24:08.441169387Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.441950 containerd[1293]: time="2024-06-25T16:24:08.441894729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jun 25 16:24:08.442936 containerd[1293]: time="2024-06-25T16:24:08.442904613Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.443850 containerd[1293]: time="2024-06-25T16:24:08.443783627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 16:24:08.453464 containerd[1293]: time="2024-06-25T16:24:08.453417503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 25 16:24:08.454039 kubelet[1964]: W0625 16:24:08.453981 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.454114 kubelet[1964]: E0625 16:24:08.454042 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.464245 containerd[1293]: time="2024-06-25T16:24:08.464215776Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.465916 containerd[1293]: time="2024-06-25T16:24:08.465865770Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.467288 containerd[1293]: time="2024-06-25T16:24:08.467246310Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.469231 containerd[1293]: time="2024-06-25T16:24:08.469192328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.471763 containerd[1293]: time="2024-06-25T16:24:08.471725425Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.473398 containerd[1293]: time="2024-06-25T16:24:08.473373775Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.474208 containerd[1293]: time="2024-06-25T16:24:08.474157518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.358393483s"
Jun 25 16:24:08.475256 containerd[1293]: time="2024-06-25T16:24:08.475209482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.337288866s"
Jun 25 16:24:08.475927 containerd[1293]: time="2024-06-25T16:24:08.475897544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.477097 containerd[1293]: time="2024-06-25T16:24:08.477039780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.341824074s"
Jun 25 16:24:08.478204 containerd[1293]: time="2024-06-25T16:24:08.478172397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.479289 containerd[1293]: time="2024-06-25T16:24:08.479253989Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 25 16:24:08.599952 containerd[1293]: time="2024-06-25T16:24:08.599855047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 16:24:08.599952 containerd[1293]: time="2024-06-25T16:24:08.599899572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:08.599952 containerd[1293]: time="2024-06-25T16:24:08.599926694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 16:24:08.600192 containerd[1293]: time="2024-06-25T16:24:08.599953424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 16:24:08.600192 containerd[1293]: time="2024-06-25T16:24:08.600008259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:08.600192 containerd[1293]: time="2024-06-25T16:24:08.600028428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 16:24:08.600192 containerd[1293]: time="2024-06-25T16:24:08.600041963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:08.600476 containerd[1293]: time="2024-06-25T16:24:08.600279575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 16:24:08.600476 containerd[1293]: time="2024-06-25T16:24:08.600350190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:08.600566 containerd[1293]: time="2024-06-25T16:24:08.599935951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:08.601018 containerd[1293]: time="2024-06-25T16:24:08.600971184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 16:24:08.601068 containerd[1293]: time="2024-06-25T16:24:08.601043922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:08.616995 systemd[1]: Started cri-containerd-5b5003537fc179b1070489e98df469107ac0507537e1f7617d818d5f5e115a76.scope - libcontainer container 5b5003537fc179b1070489e98df469107ac0507537e1f7617d818d5f5e115a76.
Jun 25 16:24:08.618110 systemd[1]: Started cri-containerd-d64b37f53a0c2eca122eec2d8e83d59981b9212fc9c8e02ddd37a434faeb7ea9.scope - libcontainer container d64b37f53a0c2eca122eec2d8e83d59981b9212fc9c8e02ddd37a434faeb7ea9.
Jun 25 16:24:08.620374 systemd[1]: Started cri-containerd-360b4a93d5b8a55a30c75f8f377df54d30f0de2e25d4def78e1f952e17cabd96.scope - libcontainer container 360b4a93d5b8a55a30c75f8f377df54d30f0de2e25d4def78e1f952e17cabd96.
Jun 25 16:24:08.627000 audit: BPF prog-id=55 op=LOAD
Jun 25 16:24:08.629084 kernel: kauditd_printk_skb: 62 callbacks suppressed
Jun 25 16:24:08.629172 kernel: audit: type=1334 audit(1719332648.627:276): prog-id=55 op=LOAD
Jun 25 16:24:08.627000 audit: BPF prog-id=56 op=LOAD
Jun 25 16:24:08.631069 kernel: audit: type=1334 audit(1719332648.627:277): prog-id=56 op=LOAD
Jun 25 16:24:08.631118 kernel: audit: type=1300 audit(1719332648.627:277): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2028 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.627000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2028 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562353030333533376663313739623130373034383965393864663436
Jun 25 16:24:08.638245 kernel: audit: type=1327 audit(1719332648.627:277): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562353030333533376663313739623130373034383965393864663436
Jun 25 16:24:08.638287 kernel: audit: type=1334 audit(1719332648.627:278): prog-id=57 op=LOAD
Jun 25 16:24:08.627000 audit: BPF prog-id=57 op=LOAD
Jun 25 16:24:08.627000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2028 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.642550 kernel: audit: type=1300 audit(1719332648.627:278): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2028 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.642616 kernel: audit: type=1327 audit(1719332648.627:278): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562353030333533376663313739623130373034383965393864663436
Jun 25 16:24:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562353030333533376663313739623130373034383965393864663436
Jun 25 16:24:08.627000 audit: BPF prog-id=57 op=UNLOAD
Jun 25 16:24:08.646462 kernel: audit: type=1334 audit(1719332648.627:279): prog-id=57 op=UNLOAD
Jun 25 16:24:08.646499 kernel: audit: type=1334 audit(1719332648.627:280): prog-id=56 op=UNLOAD
Jun 25 16:24:08.627000 audit: BPF prog-id=56 op=UNLOAD
Jun 25 16:24:08.627000 audit: BPF prog-id=58 op=LOAD
Jun 25 16:24:08.648079 kernel: audit: type=1334 audit(1719332648.627:281): prog-id=58 op=LOAD
Jun 25 16:24:08.627000 audit[2056]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2028 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562353030333533376663313739623130373034383965393864663436
Jun 25 16:24:08.628000 audit: BPF prog-id=59 op=LOAD
Jun 25 16:24:08.629000 audit: BPF prog-id=60 op=LOAD
Jun 25 16:24:08.629000 audit[2055]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2026 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346233376635336130633265636131323265656332643865383364
Jun 25 16:24:08.629000 audit: BPF prog-id=61 op=LOAD
Jun 25 16:24:08.629000 audit[2055]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2026 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346233376635336130633265636131323265656332643865383364
Jun 25 16:24:08.629000 audit: BPF prog-id=61 op=UNLOAD
Jun 25 16:24:08.629000 audit: BPF prog-id=60 op=UNLOAD
Jun 25 16:24:08.629000 audit: BPF prog-id=62 op=LOAD
Jun 25 16:24:08.629000 audit[2055]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2026 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346233376635336130633265636131323265656332643865383364
Jun 25 16:24:08.630000 audit: BPF prog-id=63 op=LOAD
Jun 25 16:24:08.630000 audit: BPF prog-id=64 op=LOAD
Jun 25 16:24:08.630000 audit[2057]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2027 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306234613933643562386135356133306337356638663337376466
Jun 25 16:24:08.630000 audit: BPF prog-id=65 op=LOAD
Jun 25 16:24:08.630000 audit[2057]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2027 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306234613933643562386135356133306337356638663337376466
Jun 25 16:24:08.630000 audit: BPF prog-id=65 op=UNLOAD
Jun 25 16:24:08.630000 audit: BPF prog-id=64 op=UNLOAD
Jun 25 16:24:08.630000 audit: BPF prog-id=66 op=LOAD
Jun 25 16:24:08.630000 audit[2057]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2027 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:08.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306234613933643562386135356133306337356638663337376466
Jun 25 16:24:08.673862 containerd[1293]: time="2024-06-25T16:24:08.673767220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"360b4a93d5b8a55a30c75f8f377df54d30f0de2e25d4def78e1f952e17cabd96\""
Jun 25 16:24:08.674006 containerd[1293]: time="2024-06-25T16:24:08.673802768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9fdc2ce3a3af715490ad36b8ba15f6a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b5003537fc179b1070489e98df469107ac0507537e1f7617d818d5f5e115a76\""
Jun 25 16:24:08.675750 kubelet[1964]: E0625 16:24:08.675710 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:08.675942 containerd[1293]: time="2024-06-25T16:24:08.675921094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"d64b37f53a0c2eca122eec2d8e83d59981b9212fc9c8e02ddd37a434faeb7ea9\""
Jun 25 16:24:08.676166 kubelet[1964]: E0625 16:24:08.676151 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:08.676399 kubelet[1964]: E0625 16:24:08.676379 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:08.679198 containerd[1293]: time="2024-06-25T16:24:08.679162099Z" level=info msg="CreateContainer within sandbox \"360b4a93d5b8a55a30c75f8f377df54d30f0de2e25d4def78e1f952e17cabd96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jun 25 16:24:08.679287 containerd[1293]: time="2024-06-25T16:24:08.679255847Z" level=info msg="CreateContainer within sandbox \"d64b37f53a0c2eca122eec2d8e83d59981b9212fc9c8e02ddd37a434faeb7ea9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jun 25 16:24:08.679407 containerd[1293]: time="2024-06-25T16:24:08.679269524Z" level=info msg="CreateContainer within sandbox \"5b5003537fc179b1070489e98df469107ac0507537e1f7617d818d5f5e115a76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jun 25 16:24:08.808916 kubelet[1964]: W0625 16:24:08.808871 1964 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.808916 kubelet[1964]: E0625 16:24:08.808918 1964 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
Jun 25 16:24:08.845921 containerd[1293]: time="2024-06-25T16:24:08.845792296Z" level=info msg="CreateContainer within sandbox \"d64b37f53a0c2eca122eec2d8e83d59981b9212fc9c8e02ddd37a434faeb7ea9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82c9f8e069cc94e542ce39769a87f8049940fe060e31a2ae9fb843ffc1c2b97f\""
Jun 25 16:24:08.846677 containerd[1293]: time="2024-06-25T16:24:08.846634170Z" level=info msg="StartContainer for \"82c9f8e069cc94e542ce39769a87f8049940fe060e31a2ae9fb843ffc1c2b97f\""
Jun 25 16:24:08.850500 containerd[1293]: time="2024-06-25T16:24:08.850447496Z" level=info msg="CreateContainer within sandbox \"5b5003537fc179b1070489e98df469107ac0507537e1f7617d818d5f5e115a76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3ec13071e221bd2c11057f25736d86f549cd1c1a39aee735689ec96de7e1e35b\""
Jun 25 16:24:08.851061 containerd[1293]: time="2024-06-25T16:24:08.851031789Z" level=info msg="StartContainer for \"3ec13071e221bd2c11057f25736d86f549cd1c1a39aee735689ec96de7e1e35b\""
Jun 25 16:24:08.852031 containerd[1293]: time="2024-06-25T16:24:08.851994633Z" level=info msg="CreateContainer within sandbox \"360b4a93d5b8a55a30c75f8f377df54d30f0de2e25d4def78e1f952e17cabd96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d409a632d1c6daa2d0d062db239cf0bc0c8077f60b3f54dd547b06ffadce3319\""
Jun 25 16:24:08.852489 containerd[1293]: time="2024-06-25T16:24:08.852439340Z" level=info msg="StartContainer for \"d409a632d1c6daa2d0d062db239cf0bc0c8077f60b3f54dd547b06ffadce3319\""
Jun 25 16:24:08.870020 systemd[1]: Started cri-containerd-82c9f8e069cc94e542ce39769a87f8049940fe060e31a2ae9fb843ffc1c2b97f.scope - libcontainer container 82c9f8e069cc94e542ce39769a87f8049940fe060e31a2ae9fb843ffc1c2b97f.
Jun 25 16:24:08.872067 systemd[1]: Started cri-containerd-3ec13071e221bd2c11057f25736d86f549cd1c1a39aee735689ec96de7e1e35b.scope - libcontainer container 3ec13071e221bd2c11057f25736d86f549cd1c1a39aee735689ec96de7e1e35b.
Jun 25 16:24:08.876938 systemd[1]: Started cri-containerd-d409a632d1c6daa2d0d062db239cf0bc0c8077f60b3f54dd547b06ffadce3319.scope - libcontainer container d409a632d1c6daa2d0d062db239cf0bc0c8077f60b3f54dd547b06ffadce3319. Jun 25 16:24:08.881000 audit: BPF prog-id=67 op=LOAD Jun 25 16:24:08.881000 audit: BPF prog-id=68 op=LOAD Jun 25 16:24:08.881000 audit[2141]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2026 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633966386530363963633934653534326365333937363961383766 Jun 25 16:24:08.881000 audit: BPF prog-id=69 op=LOAD Jun 25 16:24:08.881000 audit[2141]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2026 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633966386530363963633934653534326365333937363961383766 Jun 25 16:24:08.881000 audit: BPF prog-id=69 op=UNLOAD Jun 25 16:24:08.881000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:24:08.881000 audit: BPF prog-id=70 op=LOAD Jun 25 16:24:08.881000 audit[2141]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2026 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832633966386530363963633934653534326365333937363961383766 Jun 25 16:24:08.884000 audit: BPF prog-id=71 op=LOAD Jun 25 16:24:08.885000 audit: BPF prog-id=72 op=LOAD Jun 25 16:24:08.885000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2028 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633133303731653232316264326331313035376632353733366438 Jun 25 16:24:08.885000 audit: BPF prog-id=73 op=LOAD Jun 25 16:24:08.885000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2028 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633133303731653232316264326331313035376632353733366438 Jun 25 16:24:08.885000 audit: BPF prog-id=73 op=UNLOAD Jun 25 16:24:08.885000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:24:08.885000 audit: BPF 
prog-id=74 op=LOAD Jun 25 16:24:08.885000 audit[2156]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2028 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633133303731653232316264326331313035376632353733366438 Jun 25 16:24:08.891000 audit: BPF prog-id=75 op=LOAD Jun 25 16:24:08.892000 audit: BPF prog-id=76 op=LOAD Jun 25 16:24:08.892000 audit[2161]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=2027 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303961363332643163366461613264306430363264623233396366 Jun 25 16:24:08.892000 audit: BPF prog-id=77 op=LOAD Jun 25 16:24:08.892000 audit[2161]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=2027 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.892000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303961363332643163366461613264306430363264623233396366 Jun 25 16:24:08.892000 audit: BPF prog-id=77 op=UNLOAD Jun 25 16:24:08.892000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:24:08.892000 audit: BPF prog-id=78 op=LOAD Jun 25 16:24:08.892000 audit[2161]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00013bbe0 a2=78 a3=0 items=0 ppid=2027 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:08.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434303961363332643163366461613264306430363264623233396366 Jun 25 16:24:08.922034 containerd[1293]: time="2024-06-25T16:24:08.921958073Z" level=info msg="StartContainer for \"3ec13071e221bd2c11057f25736d86f549cd1c1a39aee735689ec96de7e1e35b\" returns successfully" Jun 25 16:24:08.922189 containerd[1293]: time="2024-06-25T16:24:08.922108299Z" level=info msg="StartContainer for \"82c9f8e069cc94e542ce39769a87f8049940fe060e31a2ae9fb843ffc1c2b97f\" returns successfully" Jun 25 16:24:08.932297 containerd[1293]: time="2024-06-25T16:24:08.932227569Z" level=info msg="StartContainer for \"d409a632d1c6daa2d0d062db239cf0bc0c8077f60b3f54dd547b06ffadce3319\" returns successfully" Jun 25 16:24:09.304497 kubelet[1964]: E0625 16:24:09.304370 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:09.306495 kubelet[1964]: E0625 16:24:09.306466 1964 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:09.308484 kubelet[1964]: E0625 16:24:09.308389 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:09.641000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.641000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0004a4060 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:24:09.641000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:09.642000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.642000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000d5a000 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) 
Jun 25 16:24:09.642000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:09.743000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.743000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c003b66030 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:24:09.743000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:24:09.743000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.743000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=44 a1=c0063a2000 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:24:09.743000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:24:09.743000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.743000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=44 a1=c003b66090 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:24:09.743000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:24:09.745000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.745000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=49 a1=c004da1dd0 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:24:09.745000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:24:09.745000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.745000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4b a1=c005878140 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:24:09.745000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:24:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:24:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4d a1=c004702750 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:24:09.746000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:24:10.226705 kubelet[1964]: E0625 16:24:10.226645 1964 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:24:10.310657 kubelet[1964]: E0625 16:24:10.310630 1964 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:10.589939 kubelet[1964]: E0625 16:24:10.589885 1964 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:24:11.014686 kubelet[1964]: E0625 16:24:11.014529 1964 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:24:11.477140 kubelet[1964]: E0625 16:24:11.477098 1964 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 16:24:11.579904 kubelet[1964]: I0625 16:24:11.579873 1964 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jun 25 16:24:11.586749 kubelet[1964]: I0625 16:24:11.586699 1964 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jun 25 16:24:11.597378 kubelet[1964]: E0625 16:24:11.597334 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:11.697949 kubelet[1964]: E0625 16:24:11.697894 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jun 25 16:24:11.798865 kubelet[1964]: E0625 16:24:11.798677 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:11.899392 kubelet[1964]: E0625 16:24:11.899339 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:12.000160 kubelet[1964]: E0625 16:24:12.000063 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:12.101108 kubelet[1964]: E0625 16:24:12.101070 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:12.201546 kubelet[1964]: E0625 16:24:12.201478 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:12.258482 systemd[1]: Reloading. Jun 25 16:24:12.268000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=7786 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:24:12.268000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000baac40 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:24:12.268000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:24:12.301941 kubelet[1964]: E0625 16:24:12.301898 1964 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jun 25 16:24:12.402192 kubelet[1964]: E0625 16:24:12.402053 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:12.410781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:24:12.485000 audit: BPF prog-id=79 op=LOAD Jun 25 16:24:12.485000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:24:12.486000 audit: BPF prog-id=80 op=LOAD Jun 25 16:24:12.486000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:24:12.486000 audit: BPF prog-id=81 op=LOAD Jun 25 16:24:12.486000 audit: BPF prog-id=82 op=LOAD Jun 25 16:24:12.486000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:24:12.486000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:24:12.486000 audit: BPF prog-id=83 op=LOAD Jun 25 16:24:12.486000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:24:12.489000 audit: BPF prog-id=84 op=LOAD Jun 25 16:24:12.489000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:24:12.489000 audit: BPF prog-id=85 op=LOAD Jun 25 16:24:12.489000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:24:12.491000 audit: BPF prog-id=86 op=LOAD Jun 25 16:24:12.491000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:24:12.492000 audit: BPF prog-id=87 op=LOAD Jun 25 16:24:12.492000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:24:12.493000 audit: BPF prog-id=88 op=LOAD Jun 25 16:24:12.493000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:24:12.493000 audit: BPF prog-id=89 op=LOAD Jun 25 16:24:12.493000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:24:12.493000 audit: BPF prog-id=90 op=LOAD Jun 25 16:24:12.493000 audit: BPF prog-id=91 op=LOAD Jun 25 16:24:12.493000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:24:12.493000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:24:12.494000 audit: BPF prog-id=92 op=LOAD Jun 25 16:24:12.494000 audit: BPF prog-id=59 op=UNLOAD Jun 25 
16:24:12.494000 audit: BPF prog-id=93 op=LOAD Jun 25 16:24:12.494000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:24:12.495000 audit: BPF prog-id=94 op=LOAD Jun 25 16:24:12.495000 audit: BPF prog-id=95 op=LOAD Jun 25 16:24:12.495000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:24:12.495000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:24:12.496000 audit: BPF prog-id=96 op=LOAD Jun 25 16:24:12.496000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:24:12.496000 audit: BPF prog-id=97 op=LOAD Jun 25 16:24:12.496000 audit: BPF prog-id=98 op=LOAD Jun 25 16:24:12.496000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:24:12.496000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:24:12.503118 kubelet[1964]: E0625 16:24:12.503069 1964 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 25 16:24:12.508070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:12.508386 kubelet[1964]: E0625 16:24:12.508184 1964 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.17dc4bee739ae936 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-06-25 16:24:05.262518582 +0000 UTC m=+0.536105731,LastTimestamp:2024-06-25 16:24:05.262518582 +0000 UTC m=+0.536105731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 25 16:24:12.530253 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:24:12.530488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:24:12.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.538180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:12.651159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:12.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.694283 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:24:12.694283 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:24:12.694283 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 16:24:12.694696 kubelet[2308]: I0625 16:24:12.694264 2308 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:24:12.699418 kubelet[2308]: I0625 16:24:12.699363 2308 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 16:24:12.699418 kubelet[2308]: I0625 16:24:12.699400 2308 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:24:12.699726 kubelet[2308]: I0625 16:24:12.699702 2308 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 16:24:12.701240 kubelet[2308]: I0625 16:24:12.701210 2308 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:24:12.702564 kubelet[2308]: I0625 16:24:12.702531 2308 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:24:12.711171 kubelet[2308]: I0625 16:24:12.711136 2308 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:24:12.711678 kubelet[2308]: I0625 16:24:12.711357 2308 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:24:12.711897 kubelet[2308]: I0625 16:24:12.711387 2308 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:24:12.712073 kubelet[2308]: I0625 16:24:12.712062 2308 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 
16:24:12.712138 kubelet[2308]: I0625 16:24:12.712131 2308 container_manager_linux.go:301] "Creating device plugin manager"
Jun 25 16:24:12.712233 kubelet[2308]: I0625 16:24:12.712223 2308 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 16:24:12.713176 kubelet[2308]: I0625 16:24:12.713144 2308 kubelet.go:400] "Attempting to sync node with API server"
Jun 25 16:24:12.713239 kubelet[2308]: I0625 16:24:12.713180 2308 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 25 16:24:12.713239 kubelet[2308]: I0625 16:24:12.713213 2308 kubelet.go:312] "Adding apiserver pod source"
Jun 25 16:24:12.713239 kubelet[2308]: I0625 16:24:12.713226 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 25 16:24:12.716842 kubelet[2308]: I0625 16:24:12.714920 2308 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Jun 25 16:24:12.716842 kubelet[2308]: I0625 16:24:12.715146 2308 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 25 16:24:12.716842 kubelet[2308]: I0625 16:24:12.715643 2308 server.go:1264] "Started kubelet"
Jun 25 16:24:12.716842 kubelet[2308]: I0625 16:24:12.716774 2308 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jun 25 16:24:12.718038 kubelet[2308]: I0625 16:24:12.717985 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 25 16:24:12.719103 kubelet[2308]: I0625 16:24:12.719087 2308 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 25 16:24:12.719186 kubelet[2308]: I0625 16:24:12.718770 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 25 16:24:12.722678 kubelet[2308]: I0625 16:24:12.722651 2308 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jun 25 16:24:12.722762 kubelet[2308]: I0625 16:24:12.718200 2308 server.go:455] "Adding debug handlers to kubelet server"
Jun 25 16:24:12.723437 kubelet[2308]: I0625 16:24:12.723415 2308 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jun 25 16:24:12.723575 kubelet[2308]: I0625 16:24:12.723554 2308 reconciler.go:26] "Reconciler: start to sync state"
Jun 25 16:24:12.725578 kubelet[2308]: I0625 16:24:12.725559 2308 factory.go:221] Registration of the systemd container factory successfully
Jun 25 16:24:12.725792 kubelet[2308]: I0625 16:24:12.725750 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 25 16:24:12.727998 kubelet[2308]: I0625 16:24:12.727760 2308 factory.go:221] Registration of the containerd container factory successfully
Jun 25 16:24:12.730055 kubelet[2308]: E0625 16:24:12.730030 2308 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 25 16:24:12.732379 kubelet[2308]: I0625 16:24:12.732317 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 25 16:24:12.733251 kubelet[2308]: I0625 16:24:12.733223 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 25 16:24:12.733251 kubelet[2308]: I0625 16:24:12.733252 2308 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 25 16:24:12.733322 kubelet[2308]: I0625 16:24:12.733271 2308 kubelet.go:2337] "Starting kubelet main sync loop"
Jun 25 16:24:12.733350 kubelet[2308]: E0625 16:24:12.733324 2308 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 25 16:24:12.766381 kubelet[2308]: I0625 16:24:12.766340 2308 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 25 16:24:12.766381 kubelet[2308]: I0625 16:24:12.766360 2308 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 25 16:24:12.766381 kubelet[2308]: I0625 16:24:12.766380 2308 state_mem.go:36] "Initialized new in-memory state store"
Jun 25 16:24:12.766589 kubelet[2308]: I0625 16:24:12.766520 2308 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 25 16:24:12.766589 kubelet[2308]: I0625 16:24:12.766530 2308 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 25 16:24:12.766589 kubelet[2308]: I0625 16:24:12.766590 2308 policy_none.go:49] "None policy: Start"
Jun 25 16:24:12.767120 kubelet[2308]: I0625 16:24:12.767098 2308 memory_manager.go:170] "Starting memorymanager" policy="None"
Jun 25 16:24:12.767120 kubelet[2308]: I0625 16:24:12.767117 2308 state_mem.go:35] "Initializing new in-memory state store"
Jun 25 16:24:12.767291 kubelet[2308]: I0625 16:24:12.767259 2308 state_mem.go:75] "Updated machine memory state"
Jun 25 16:24:12.771224 kubelet[2308]: I0625 16:24:12.771193 2308 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 25 16:24:12.771375 kubelet[2308]: I0625 16:24:12.771342 2308 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 25 16:24:12.771972 kubelet[2308]: I0625 16:24:12.771667 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 25 16:24:12.834130 kubelet[2308]: I0625 16:24:12.834068 2308 topology_manager.go:215] "Topology Admit Handler" podUID="9fdc2ce3a3af715490ad36b8ba15f6a8" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jun 25 16:24:12.834298 kubelet[2308]: I0625 16:24:12.834193 2308 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jun 25 16:24:12.834298 kubelet[2308]: I0625 16:24:12.834261 2308 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jun 25 16:24:12.877111 kubelet[2308]: I0625 16:24:12.877078 2308 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jun 25 16:24:12.925512 kubelet[2308]: I0625 16:24:12.925482 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fdc2ce3a3af715490ad36b8ba15f6a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fdc2ce3a3af715490ad36b8ba15f6a8\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:24:12.925644 kubelet[2308]: I0625 16:24:12.925520 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:12.925644 kubelet[2308]: I0625 16:24:12.925540 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost"
Jun 25 16:24:12.925644 kubelet[2308]: I0625 16:24:12.925552 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fdc2ce3a3af715490ad36b8ba15f6a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9fdc2ce3a3af715490ad36b8ba15f6a8\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:24:12.925644 kubelet[2308]: I0625 16:24:12.925570 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fdc2ce3a3af715490ad36b8ba15f6a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9fdc2ce3a3af715490ad36b8ba15f6a8\") " pod="kube-system/kube-apiserver-localhost"
Jun 25 16:24:12.925644 kubelet[2308]: I0625 16:24:12.925584 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:12.925746 kubelet[2308]: I0625 16:24:12.925609 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:12.925746 kubelet[2308]: I0625 16:24:12.925637 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:12.925786 kubelet[2308]: I0625 16:24:12.925728 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jun 25 16:24:13.194655 kubelet[2308]: E0625 16:24:13.194580 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:13.194846 kubelet[2308]: E0625 16:24:13.194806 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:13.195111 kubelet[2308]: E0625 16:24:13.195096 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:13.220462 kubelet[2308]: I0625 16:24:13.220408 2308 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jun 25 16:24:13.220647 kubelet[2308]: I0625 16:24:13.220553 2308 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jun 25 16:24:13.714846 kubelet[2308]: I0625 16:24:13.714790 2308 apiserver.go:52] "Watching apiserver"
Jun 25 16:24:13.723646 kubelet[2308]: I0625 16:24:13.723602 2308 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jun 25 16:24:13.750751 kubelet[2308]: E0625 16:24:13.750718 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:13.751853 kubelet[2308]: E0625 16:24:13.751841 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:13.911116 kubelet[2308]: E0625 16:24:13.911063 2308 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jun 25 16:24:13.911496 kubelet[2308]: E0625 16:24:13.911475 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:14.130702 kubelet[2308]: I0625 16:24:14.130629 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.130606672 podStartE2EDuration="1.130606672s" podCreationTimestamp="2024-06-25 16:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:13.925406872 +0000 UTC m=+1.270066905" watchObservedRunningTime="2024-06-25 16:24:14.130606672 +0000 UTC m=+1.475266705"
Jun 25 16:24:14.130944 kubelet[2308]: I0625 16:24:14.130754 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.130749503 podStartE2EDuration="1.130749503s" podCreationTimestamp="2024-06-25 16:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:14.130584761 +0000 UTC m=+1.475244794" watchObservedRunningTime="2024-06-25 16:24:14.130749503 +0000 UTC m=+1.475409536"
Jun 25 16:24:14.187387 kubelet[2308]: I0625 16:24:14.187317 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.187295062 podStartE2EDuration="1.187295062s" podCreationTimestamp="2024-06-25 16:24:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:14.187124669 +0000 UTC m=+1.531784702" watchObservedRunningTime="2024-06-25 16:24:14.187295062 +0000 UTC m=+1.531955095"
Jun 25 16:24:14.373000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.389236 kernel: kauditd_printk_skb: 131 callbacks suppressed
Jun 25 16:24:14.389404 kernel: audit: type=1400 audit(1719332654.373:363): avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.373000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00111a520 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:24:14.395952 kernel: audit: type=1300 audit(1719332654.373:363): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00111a520 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:24:14.396015 kernel: audit: type=1327 audit(1719332654.373:363): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:24:14.373000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:24:14.399049 kernel: audit: type=1400 audit(1719332654.374:364): avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.374000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.402195 kernel: audit: type=1300 audit(1719332654.374:364): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00111a560 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:24:14.374000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00111a560 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:24:14.406351 kernel: audit: type=1327 audit(1719332654.374:364): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:24:14.374000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:24:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.418872 kernel: audit: type=1400 audit(1719332654.375:365): avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.418943 kernel: audit: type=1300 audit(1719332654.375:365): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00111a880 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:24:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00111a880 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:24:14.422457 kernel: audit: type=1327 audit(1719332654.375:365): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:24:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:24:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.428471 kernel: audit: type=1400 audit(1719332654.375:366): avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:24:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0013192a0 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:24:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:24:14.751891 kubelet[2308]: E0625 16:24:14.751776 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:14.934012 update_engine[1280]: I0625 16:24:14.933967 1280 update_attempter.cc:509] Updating boot flags...
Jun 25 16:24:15.031565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2356)
Jun 25 16:24:15.273901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2358)
Jun 25 16:24:15.298859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2358)
Jun 25 16:24:15.753755 kubelet[2308]: E0625 16:24:15.753690 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:16.201520 kubelet[2308]: E0625 16:24:16.201478 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:20.803962 sudo[1432]: pam_unix(sudo:session): session closed for user root
Jun 25 16:24:20.803000 audit[1432]: USER_END pid=1432 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:24:20.807570 kernel: kauditd_printk_skb: 2 callbacks suppressed
Jun 25 16:24:20.807634 kernel: audit: type=1106 audit(1719332660.803:367): pid=1432 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:24:20.807665 kernel: audit: type=1104 audit(1719332660.803:368): pid=1432 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:24:20.803000 audit[1432]: CRED_DISP pid=1432 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jun 25 16:24:20.813650 sshd[1429]: pam_unix(sshd:session): session closed for user core
Jun 25 16:24:20.813000 audit[1429]: USER_END pid=1429 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:24:20.815746 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:35438.service: Deactivated successfully.
Jun 25 16:24:20.816435 systemd[1]: session-7.scope: Deactivated successfully.
Jun 25 16:24:20.816571 systemd[1]: session-7.scope: Consumed 6.429s CPU time.
Jun 25 16:24:20.817094 systemd-logind[1278]: Session 7 logged out. Waiting for processes to exit.
Jun 25 16:24:20.813000 audit[1429]: CRED_DISP pid=1429 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:24:20.817911 systemd-logind[1278]: Removed session 7.
Jun 25 16:24:20.820539 kernel: audit: type=1106 audit(1719332660.813:369): pid=1429 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:24:20.820581 kernel: audit: type=1104 audit(1719332660.813:370): pid=1429 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:24:20.820598 kernel: audit: type=1131 audit(1719332660.815:371): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.104:22-10.0.0.1:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:24:20.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.104:22-10.0.0.1:35438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:24:21.982745 kubelet[2308]: E0625 16:24:21.982703 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:22.630448 kubelet[2308]: E0625 16:24:22.630422 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:22.762865 kubelet[2308]: E0625 16:24:22.762802 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:22.763712 kubelet[2308]: E0625 16:24:22.763038 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:23.763919 kubelet[2308]: E0625 16:24:23.763882 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:26.204969 kubelet[2308]: E0625 16:24:26.204929 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:27.328595 kubelet[2308]: I0625 16:24:27.328569 2308 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 25 16:24:27.329610 containerd[1293]: time="2024-06-25T16:24:27.329553793Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 25 16:24:27.329868 kubelet[2308]: I0625 16:24:27.329784 2308 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 25 16:24:27.803801 kubelet[2308]: I0625 16:24:27.803644 2308 topology_manager.go:215] "Topology Admit Handler" podUID="7ed727b7-c31d-4560-8ec5-c148c8b6dca4" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-8m8sx"
Jun 25 16:24:27.809344 systemd[1]: Created slice kubepods-besteffort-pod7ed727b7_c31d_4560_8ec5_c148c8b6dca4.slice - libcontainer container kubepods-besteffort-pod7ed727b7_c31d_4560_8ec5_c148c8b6dca4.slice.
Jun 25 16:24:27.820459 kubelet[2308]: I0625 16:24:27.820395 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ed727b7-c31d-4560-8ec5-c148c8b6dca4-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-8m8sx\" (UID: \"7ed727b7-c31d-4560-8ec5-c148c8b6dca4\") " pod="tigera-operator/tigera-operator-76ff79f7fd-8m8sx"
Jun 25 16:24:27.820608 kubelet[2308]: I0625 16:24:27.820472 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb82g\" (UniqueName: \"kubernetes.io/projected/7ed727b7-c31d-4560-8ec5-c148c8b6dca4-kube-api-access-nb82g\") pod \"tigera-operator-76ff79f7fd-8m8sx\" (UID: \"7ed727b7-c31d-4560-8ec5-c148c8b6dca4\") " pod="tigera-operator/tigera-operator-76ff79f7fd-8m8sx"
Jun 25 16:24:28.041634 kubelet[2308]: I0625 16:24:28.041585 2308 topology_manager.go:215] "Topology Admit Handler" podUID="af392a2b-7a44-42c0-a83d-60ba3baec284" podNamespace="kube-system" podName="kube-proxy-7xwnc"
Jun 25 16:24:28.047587 systemd[1]: Created slice kubepods-besteffort-podaf392a2b_7a44_42c0_a83d_60ba3baec284.slice - libcontainer container kubepods-besteffort-podaf392a2b_7a44_42c0_a83d_60ba3baec284.slice.
Jun 25 16:24:28.122848 kubelet[2308]: I0625 16:24:28.122745 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af392a2b-7a44-42c0-a83d-60ba3baec284-kube-proxy\") pod \"kube-proxy-7xwnc\" (UID: \"af392a2b-7a44-42c0-a83d-60ba3baec284\") " pod="kube-system/kube-proxy-7xwnc"
Jun 25 16:24:28.122848 kubelet[2308]: I0625 16:24:28.122802 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af392a2b-7a44-42c0-a83d-60ba3baec284-xtables-lock\") pod \"kube-proxy-7xwnc\" (UID: \"af392a2b-7a44-42c0-a83d-60ba3baec284\") " pod="kube-system/kube-proxy-7xwnc"
Jun 25 16:24:28.122848 kubelet[2308]: I0625 16:24:28.122848 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qwxj\" (UniqueName: \"kubernetes.io/projected/af392a2b-7a44-42c0-a83d-60ba3baec284-kube-api-access-6qwxj\") pod \"kube-proxy-7xwnc\" (UID: \"af392a2b-7a44-42c0-a83d-60ba3baec284\") " pod="kube-system/kube-proxy-7xwnc"
Jun 25 16:24:28.123118 kubelet[2308]: I0625 16:24:28.122884 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af392a2b-7a44-42c0-a83d-60ba3baec284-lib-modules\") pod \"kube-proxy-7xwnc\" (UID: \"af392a2b-7a44-42c0-a83d-60ba3baec284\") " pod="kube-system/kube-proxy-7xwnc"
Jun 25 16:24:28.123883 containerd[1293]: time="2024-06-25T16:24:28.123803449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-8m8sx,Uid:7ed727b7-c31d-4560-8ec5-c148c8b6dca4,Namespace:tigera-operator,Attempt:0,}"
Jun 25 16:24:28.350643 kubelet[2308]: E0625 16:24:28.350570 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:24:28.351212 containerd[1293]: time="2024-06-25T16:24:28.351157693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xwnc,Uid:af392a2b-7a44-42c0-a83d-60ba3baec284,Namespace:kube-system,Attempt:0,}"
Jun 25 16:24:28.371696 containerd[1293]: time="2024-06-25T16:24:28.371590209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 16:24:28.371696 containerd[1293]: time="2024-06-25T16:24:28.371659820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:28.371696 containerd[1293]: time="2024-06-25T16:24:28.371681010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 16:24:28.371696 containerd[1293]: time="2024-06-25T16:24:28.371695357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:28.392026 systemd[1]: Started cri-containerd-fce6b9fb3deea56495840e1a59754cbcee443299ca5ea8b7dd89b1a15a028ccd.scope - libcontainer container fce6b9fb3deea56495840e1a59754cbcee443299ca5ea8b7dd89b1a15a028ccd.
Jun 25 16:24:28.399000 audit: BPF prog-id=99 op=LOAD
Jun 25 16:24:28.400000 audit: BPF prog-id=100 op=LOAD
Jun 25 16:24:28.403141 kernel: audit: type=1334 audit(1719332668.399:372): prog-id=99 op=LOAD
Jun 25 16:24:28.403186 kernel: audit: type=1334 audit(1719332668.400:373): prog-id=100 op=LOAD
Jun 25 16:24:28.403211 kernel: audit: type=1300 audit(1719332668.400:373): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2421 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:28.400000 audit[2431]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2421 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:28.406434 kernel: audit: type=1327 audit(1719332668.400:373): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663653662396662336465656135363439353834306531613539373534
Jun 25 16:24:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663653662396662336465656135363439353834306531613539373534
Jun 25 16:24:28.409559 kernel: audit: type=1334 audit(1719332668.400:374): prog-id=101 op=LOAD
Jun 25 16:24:28.400000 audit: BPF prog-id=101 op=LOAD
Jun 25 16:24:28.400000 audit[2431]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2421 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:28.414055 kernel: audit: type=1300 audit(1719332668.400:374): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2421 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:28.414115 kernel: audit: type=1327 audit(1719332668.400:374): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663653662396662336465656135363439353834306531613539373534
Jun 25 16:24:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663653662396662336465656135363439353834306531613539373534
Jun 25 16:24:28.417376 kernel: audit: type=1334 audit(1719332668.400:375): prog-id=101 op=UNLOAD
Jun 25 16:24:28.400000 audit: BPF prog-id=101 op=UNLOAD
Jun 25 16:24:28.418300 kernel: audit: type=1334 audit(1719332668.400:376): prog-id=100 op=UNLOAD
Jun 25 16:24:28.400000 audit: BPF prog-id=100 op=UNLOAD
Jun 25 16:24:28.419191 kernel: audit: type=1334 audit(1719332668.400:377): prog-id=102 op=LOAD
Jun 25 16:24:28.400000 audit: BPF prog-id=102 op=LOAD
Jun 25 16:24:28.400000 audit[2431]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2421 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:24:28.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663653662396662336465656135363439353834306531613539373534
Jun 25 16:24:28.434590 containerd[1293]: time="2024-06-25T16:24:28.434548203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-8m8sx,Uid:7ed727b7-c31d-4560-8ec5-c148c8b6dca4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fce6b9fb3deea56495840e1a59754cbcee443299ca5ea8b7dd89b1a15a028ccd\""
Jun 25 16:24:28.436973 containerd[1293]: time="2024-06-25T16:24:28.436903599Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Jun 25 16:24:28.625202 containerd[1293]: time="2024-06-25T16:24:28.625060781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jun 25 16:24:28.625202 containerd[1293]: time="2024-06-25T16:24:28.625165569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:28.625202 containerd[1293]: time="2024-06-25T16:24:28.625187039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jun 25 16:24:28.625202 containerd[1293]: time="2024-06-25T16:24:28.625197179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jun 25 16:24:28.651069 systemd[1]: Started cri-containerd-d0a8a9d4d1a304e93ef69c72a89ac9f184ce4ca15d80a7cc79618b45ce2be52f.scope - libcontainer container d0a8a9d4d1a304e93ef69c72a89ac9f184ce4ca15d80a7cc79618b45ce2be52f.
Jun 25 16:24:28.658000 audit: BPF prog-id=103 op=LOAD Jun 25 16:24:28.658000 audit: BPF prog-id=104 op=LOAD Jun 25 16:24:28.658000 audit[2472]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2461 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430613861396434643161333034653933656636396337326138396163 Jun 25 16:24:28.658000 audit: BPF prog-id=105 op=LOAD Jun 25 16:24:28.658000 audit[2472]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2461 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430613861396434643161333034653933656636396337326138396163 Jun 25 16:24:28.658000 audit: BPF prog-id=105 op=UNLOAD Jun 25 16:24:28.658000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:24:28.658000 audit: BPF prog-id=106 op=LOAD Jun 25 16:24:28.658000 audit[2472]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2461 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.658000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430613861396434643161333034653933656636396337326138396163 Jun 25 16:24:28.670695 containerd[1293]: time="2024-06-25T16:24:28.670645992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xwnc,Uid:af392a2b-7a44-42c0-a83d-60ba3baec284,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a8a9d4d1a304e93ef69c72a89ac9f184ce4ca15d80a7cc79618b45ce2be52f\"" Jun 25 16:24:28.671303 kubelet[2308]: E0625 16:24:28.671280 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:28.673093 containerd[1293]: time="2024-06-25T16:24:28.673046493Z" level=info msg="CreateContainer within sandbox \"d0a8a9d4d1a304e93ef69c72a89ac9f184ce4ca15d80a7cc79618b45ce2be52f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:24:29.268265 containerd[1293]: time="2024-06-25T16:24:29.268181322Z" level=info msg="CreateContainer within sandbox \"d0a8a9d4d1a304e93ef69c72a89ac9f184ce4ca15d80a7cc79618b45ce2be52f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef6bdf2a5802b70e208f93d9d752436a2ec7991b3be8a86b72a64c87ff5d7887\"" Jun 25 16:24:29.269213 containerd[1293]: time="2024-06-25T16:24:29.269158011Z" level=info msg="StartContainer for \"ef6bdf2a5802b70e208f93d9d752436a2ec7991b3be8a86b72a64c87ff5d7887\"" Jun 25 16:24:29.301138 systemd[1]: Started cri-containerd-ef6bdf2a5802b70e208f93d9d752436a2ec7991b3be8a86b72a64c87ff5d7887.scope - libcontainer container ef6bdf2a5802b70e208f93d9d752436a2ec7991b3be8a86b72a64c87ff5d7887. 
Jun 25 16:24:29.311000 audit: BPF prog-id=107 op=LOAD Jun 25 16:24:29.311000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2461 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366264663261353830326237306532303866393364396437353234 Jun 25 16:24:29.312000 audit: BPF prog-id=108 op=LOAD Jun 25 16:24:29.312000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2461 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366264663261353830326237306532303866393364396437353234 Jun 25 16:24:29.312000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:24:29.312000 audit: BPF prog-id=107 op=UNLOAD Jun 25 16:24:29.312000 audit: BPF prog-id=109 op=LOAD Jun 25 16:24:29.312000 audit[2503]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2461 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.312000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366264663261353830326237306532303866393364396437353234 Jun 25 16:24:29.386000 audit[2555]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.386000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe0bab2e10 a2=0 a3=7ffe0bab2dfc items=0 ppid=2514 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.386000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:24:29.387000 audit[2557]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.387000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd096cd30 a2=0 a3=7ffcd096cd1c items=0 ppid=2514 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.387000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:24:29.388000 audit[2556]: NETFILTER_CFG table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.388000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc942a1160 a2=0 a3=7ffc942a114c items=0 ppid=2514 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:24:29.388000 audit[2558]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.388000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc13e131a0 a2=0 a3=7ffc13e1318c items=0 ppid=2514 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:24:29.390000 audit[2559]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.390000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb04c1c10 a2=0 a3=7ffeb04c1bfc items=0 ppid=2514 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.390000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:24:29.391000 audit[2560]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.391000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff69f825a0 a2=0 a3=7fff69f8258c items=0 ppid=2514 pid=2560 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:24:29.424775 containerd[1293]: time="2024-06-25T16:24:29.424706147Z" level=info msg="StartContainer for \"ef6bdf2a5802b70e208f93d9d752436a2ec7991b3be8a86b72a64c87ff5d7887\" returns successfully" Jun 25 16:24:29.488000 audit[2561]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.488000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcc38ce570 a2=0 a3=7ffcc38ce55c items=0 ppid=2514 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:24:29.490000 audit[2563]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.490000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffff10a18a0 a2=0 a3=7ffff10a188c items=0 ppid=2514 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.490000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:24:29.494000 audit[2566]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.494000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffecdd8b900 a2=0 a3=7ffecdd8b8ec items=0 ppid=2514 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:24:29.495000 audit[2567]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.495000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9d2b9f80 a2=0 a3=7ffc9d2b9f6c items=0 ppid=2514 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:24:29.497000 audit[2569]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.497000 audit[2569]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffe7f18c0e0 a2=0 a3=7ffe7f18c0cc items=0 ppid=2514 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:24:29.498000 audit[2570]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.498000 audit[2570]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb0a30530 a2=0 a3=7fffb0a3051c items=0 ppid=2514 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:24:29.501000 audit[2572]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.501000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcf5390f20 a2=0 a3=7ffcf5390f0c items=0 ppid=2514 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.501000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:24:29.505000 audit[2575]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.505000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd9ece3980 a2=0 a3=7ffd9ece396c items=0 ppid=2514 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:24:29.506000 audit[2576]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.506000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe38aac4b0 a2=0 a3=7ffe38aac49c items=0 ppid=2514 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:24:29.508000 audit[2578]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.508000 audit[2578]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffd2c4c1e60 a2=0 a3=7ffd2c4c1e4c items=0 ppid=2514 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:24:29.509000 audit[2579]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.509000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9d3c8040 a2=0 a3=7ffe9d3c802c items=0 ppid=2514 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:24:29.513000 audit[2581]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.513000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff800e7280 a2=0 a3=7fff800e726c items=0 ppid=2514 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.513000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:24:29.517000 audit[2584]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.517000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff30ae2ad0 a2=0 a3=7fff30ae2abc items=0 ppid=2514 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.517000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:24:29.521000 audit[2587]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.521000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdae0c2c20 a2=0 a3=7ffdae0c2c0c items=0 ppid=2514 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:24:29.523000 audit[2588]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.523000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8bafd550 a2=0 a3=7ffc8bafd53c items=0 ppid=2514 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:24:29.525000 audit[2590]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.525000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdbd5eeb40 a2=0 a3=7ffdbd5eeb2c items=0 ppid=2514 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:24:29.529000 audit[2593]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.529000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffebf97ee60 a2=0 a3=7ffebf97ee4c items=0 ppid=2514 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.529000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:24:29.530000 audit[2594]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.530000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc5659d70 a2=0 a3=7ffdc5659d5c items=0 ppid=2514 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.530000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:24:29.533000 audit[2596]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:29.533000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdd15b9720 a2=0 a3=7ffdd15b970c items=0 ppid=2514 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.533000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:24:29.550000 audit[2602]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:29.550000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fffad30d670 a2=0 a3=7fffad30d65c 
items=0 ppid=2514 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:29.554000 audit[2602]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:29.554000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffad30d670 a2=0 a3=7fffad30d65c items=0 ppid=2514 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:29.557000 audit[2609]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.557000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffc7d184e0 a2=0 a3=7fffc7d184cc items=0 ppid=2514 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:24:29.560000 audit[2611]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.560000 audit[2611]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffd9f59d1f0 a2=0 a3=7ffd9f59d1dc items=0 ppid=2514 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:24:29.564000 audit[2614]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.564000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd2c5a6d50 a2=0 a3=7ffd2c5a6d3c items=0 ppid=2514 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:24:29.565000 audit[2615]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.565000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebbecc6f0 a2=0 a3=7ffebbecc6dc items=0 ppid=2514 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.565000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:24:29.567000 audit[2617]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.567000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef4b21ef0 a2=0 a3=7ffef4b21edc items=0 ppid=2514 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:24:29.568000 audit[2618]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.568000 audit[2618]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa6eed400 a2=0 a3=7fffa6eed3ec items=0 ppid=2514 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:24:29.572000 audit[2620]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.572000 audit[2620]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffedb2970e0 a2=0 a3=7ffedb2970cc items=0 ppid=2514 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:24:29.576000 audit[2623]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.576000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffccdfc7b60 a2=0 a3=7ffccdfc7b4c items=0 ppid=2514 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.576000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:24:29.577000 audit[2624]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2624 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.577000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd51afeb30 a2=0 a3=7ffd51afeb1c items=0 ppid=2514 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:24:29.579000 audit[2626]: NETFILTER_CFG 
table=filter:74 family=10 entries=1 op=nft_register_rule pid=2626 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.579000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe38463410 a2=0 a3=7ffe384633fc items=0 ppid=2514 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:24:29.580000 audit[2627]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.580000 audit[2627]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff36641670 a2=0 a3=7fff3664165c items=0 ppid=2514 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:24:29.582000 audit[2629]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.582000 audit[2629]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6f182eb0 a2=0 a3=7ffe6f182e9c items=0 ppid=2514 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.582000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:24:29.586000 audit[2632]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.586000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc87d13a20 a2=0 a3=7ffc87d13a0c items=0 ppid=2514 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:24:29.589000 audit[2635]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.589000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda9f3fa20 a2=0 a3=7ffda9f3fa0c items=0 ppid=2514 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:24:29.590000 audit[2636]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.590000 audit[2636]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1f111830 a2=0 a3=7ffd1f11181c items=0 ppid=2514 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:24:29.593000 audit[2638]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.593000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe59e49750 a2=0 a3=7ffe59e4973c items=0 ppid=2514 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:24:29.596000 audit[2641]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.596000 audit[2641]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffedcc1b560 a2=0 a3=7ffedcc1b54c items=0 ppid=2514 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.596000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:24:29.597000 audit[2642]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2642 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.597000 audit[2642]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe192e4910 a2=0 a3=7ffe192e48fc items=0 ppid=2514 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:24:29.599000 audit[2644]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2644 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.599000 audit[2644]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe45c22c80 a2=0 a3=7ffe45c22c6c items=0 ppid=2514 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:24:29.601000 audit[2645]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.601000 audit[2645]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0b801060 a2=0 
a3=7ffe0b80104c items=0 ppid=2514 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:24:29.602000 audit[2647]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.602000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe62d751c0 a2=0 a3=7ffe62d751ac items=0 ppid=2514 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.602000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:24:29.605000 audit[2650]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:29.605000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe80eda7b0 a2=0 a3=7ffe80eda79c items=0 ppid=2514 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.605000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:24:29.608000 audit[2652]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2652 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:24:29.608000 audit[2652]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fffeed937e0 a2=0 a3=7fffeed937cc items=0 ppid=2514 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.608000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:29.608000 audit[2652]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2652 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:24:29.608000 audit[2652]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffeed937e0 a2=0 a3=7fffeed937cc items=0 ppid=2514 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:29.608000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:29.775397 kubelet[2308]: E0625 16:24:29.775272 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:29.948516 systemd[1]: run-containerd-runc-k8s.io-ef6bdf2a5802b70e208f93d9d752436a2ec7991b3be8a86b72a64c87ff5d7887-runc.tHk3Iw.mount: Deactivated successfully. Jun 25 16:24:30.286583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965954071.mount: Deactivated successfully. 
Jun 25 16:24:30.777152 kubelet[2308]: E0625 16:24:30.777111 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:31.690586 containerd[1293]: time="2024-06-25T16:24:31.690527090Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.692997 containerd[1293]: time="2024-06-25T16:24:31.692945621Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076056" Jun 25 16:24:31.697082 containerd[1293]: time="2024-06-25T16:24:31.697021912Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.699976 containerd[1293]: time="2024-06-25T16:24:31.699902462Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.702134 containerd[1293]: time="2024-06-25T16:24:31.702067265Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:31.703171 containerd[1293]: time="2024-06-25T16:24:31.703125956Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.266180249s" Jun 25 16:24:31.703239 containerd[1293]: time="2024-06-25T16:24:31.703168957Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference 
\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:24:31.707459 containerd[1293]: time="2024-06-25T16:24:31.707417391Z" level=info msg="CreateContainer within sandbox \"fce6b9fb3deea56495840e1a59754cbcee443299ca5ea8b7dd89b1a15a028ccd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:24:31.730083 containerd[1293]: time="2024-06-25T16:24:31.730005873Z" level=info msg="CreateContainer within sandbox \"fce6b9fb3deea56495840e1a59754cbcee443299ca5ea8b7dd89b1a15a028ccd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4eea56bc69d8bdb894034a8f74cb6741d6183eeeb2a9529ea3d0521d9bf42fdd\"" Jun 25 16:24:31.730624 containerd[1293]: time="2024-06-25T16:24:31.730587047Z" level=info msg="StartContainer for \"4eea56bc69d8bdb894034a8f74cb6741d6183eeeb2a9529ea3d0521d9bf42fdd\"" Jun 25 16:24:31.755031 systemd[1]: Started cri-containerd-4eea56bc69d8bdb894034a8f74cb6741d6183eeeb2a9529ea3d0521d9bf42fdd.scope - libcontainer container 4eea56bc69d8bdb894034a8f74cb6741d6183eeeb2a9529ea3d0521d9bf42fdd. 
Jun 25 16:24:31.763000 audit: BPF prog-id=110 op=LOAD Jun 25 16:24:31.763000 audit: BPF prog-id=111 op=LOAD Jun 25 16:24:31.763000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2421 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656135366263363964386264623839343033346138663734636236 Jun 25 16:24:31.763000 audit: BPF prog-id=112 op=LOAD Jun 25 16:24:31.763000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2421 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.763000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656135366263363964386264623839343033346138663734636236 Jun 25 16:24:31.763000 audit: BPF prog-id=112 op=UNLOAD Jun 25 16:24:31.763000 audit: BPF prog-id=111 op=UNLOAD Jun 25 16:24:31.763000 audit: BPF prog-id=113 op=LOAD Jun 25 16:24:31.763000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2421 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:31.763000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465656135366263363964386264623839343033346138663734636236 Jun 25 16:24:31.857620 containerd[1293]: time="2024-06-25T16:24:31.857569916Z" level=info msg="StartContainer for \"4eea56bc69d8bdb894034a8f74cb6741d6183eeeb2a9529ea3d0521d9bf42fdd\" returns successfully" Jun 25 16:24:32.871223 kubelet[2308]: I0625 16:24:32.871170 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-8m8sx" podStartSLOduration=2.6012687530000003 podStartE2EDuration="5.871148905s" podCreationTimestamp="2024-06-25 16:24:27 +0000 UTC" firstStartedPulling="2024-06-25 16:24:28.43624068 +0000 UTC m=+15.780900713" lastFinishedPulling="2024-06-25 16:24:31.706120832 +0000 UTC m=+19.050780865" observedRunningTime="2024-06-25 16:24:32.869984655 +0000 UTC m=+20.214644688" watchObservedRunningTime="2024-06-25 16:24:32.871148905 +0000 UTC m=+20.215808938" Jun 25 16:24:32.871776 kubelet[2308]: I0625 16:24:32.871729 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7xwnc" podStartSLOduration=4.871721823 podStartE2EDuration="4.871721823s" podCreationTimestamp="2024-06-25 16:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:29.786328247 +0000 UTC m=+17.130988310" watchObservedRunningTime="2024-06-25 16:24:32.871721823 +0000 UTC m=+20.216381876" Jun 25 16:24:35.274000 audit[2704]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.277954 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 16:24:35.278055 kernel: audit: type=1325 audit(1719332675.274:446): table=filter:89 
family=2 entries=15 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.274000 audit[2704]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffed360bd40 a2=0 a3=7ffed360bd2c items=0 ppid=2514 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.284502 kernel: audit: type=1300 audit(1719332675.274:446): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffed360bd40 a2=0 a3=7ffed360bd2c items=0 ppid=2514 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.284662 kernel: audit: type=1327 audit(1719332675.274:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.275000 audit[2704]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.275000 audit[2704]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed360bd40 a2=0 a3=0 items=0 ppid=2514 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.294350 kernel: audit: type=1325 audit(1719332675.275:447): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.294464 kernel: audit: type=1300 audit(1719332675.275:447): 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed360bd40 a2=0 a3=0 items=0 ppid=2514 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.294509 kernel: audit: type=1327 audit(1719332675.275:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.275000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.292000 audit[2706]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.292000 audit[2706]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd34360250 a2=0 a3=7ffd3436023c items=0 ppid=2514 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.306187 kernel: audit: type=1325 audit(1719332675.292:448): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.306249 kernel: audit: type=1300 audit(1719332675.292:448): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd34360250 a2=0 a3=7ffd3436023c items=0 ppid=2514 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.306290 kernel: audit: type=1327 audit(1719332675.292:448): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.292000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.299000 audit[2706]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.299000 audit[2706]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd34360250 a2=0 a3=0 items=0 ppid=2514 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:35.312852 kernel: audit: type=1325 audit(1719332675.299:449): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:35.421372 kubelet[2308]: I0625 16:24:35.421304 2308 topology_manager.go:215] "Topology Admit Handler" podUID="95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290" podNamespace="calico-system" podName="calico-typha-549dcf549-q7bdw" Jun 25 16:24:35.428330 systemd[1]: Created slice kubepods-besteffort-pod95b5b5f0_42ff_4a1d_b22c_2e9cfb7fa290.slice - libcontainer container kubepods-besteffort-pod95b5b5f0_42ff_4a1d_b22c_2e9cfb7fa290.slice. 
Jun 25 16:24:35.476018 kubelet[2308]: I0625 16:24:35.475953 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290-typha-certs\") pod \"calico-typha-549dcf549-q7bdw\" (UID: \"95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290\") " pod="calico-system/calico-typha-549dcf549-q7bdw" Jun 25 16:24:35.476018 kubelet[2308]: I0625 16:24:35.476005 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290-tigera-ca-bundle\") pod \"calico-typha-549dcf549-q7bdw\" (UID: \"95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290\") " pod="calico-system/calico-typha-549dcf549-q7bdw" Jun 25 16:24:35.476295 kubelet[2308]: I0625 16:24:35.476057 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjckt\" (UniqueName: \"kubernetes.io/projected/95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290-kube-api-access-kjckt\") pod \"calico-typha-549dcf549-q7bdw\" (UID: \"95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290\") " pod="calico-system/calico-typha-549dcf549-q7bdw" Jun 25 16:24:35.480359 kubelet[2308]: I0625 16:24:35.480310 2308 topology_manager.go:215] "Topology Admit Handler" podUID="c46319b9-0d24-471c-be81-5dfbc39dfc7e" podNamespace="calico-system" podName="calico-node-p6p47" Jun 25 16:24:35.490856 systemd[1]: Created slice kubepods-besteffort-podc46319b9_0d24_471c_be81_5dfbc39dfc7e.slice - libcontainer container kubepods-besteffort-podc46319b9_0d24_471c_be81_5dfbc39dfc7e.slice. 
Jun 25 16:24:35.576579 kubelet[2308]: I0625 16:24:35.576534 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-flexvol-driver-host\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576579 kubelet[2308]: I0625 16:24:35.576575 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2gz\" (UniqueName: \"kubernetes.io/projected/c46319b9-0d24-471c-be81-5dfbc39dfc7e-kube-api-access-pq2gz\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576579 kubelet[2308]: I0625 16:24:35.576591 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-bin-dir\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576799 kubelet[2308]: I0625 16:24:35.576610 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46319b9-0d24-471c-be81-5dfbc39dfc7e-tigera-ca-bundle\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576799 kubelet[2308]: I0625 16:24:35.576629 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-run-calico\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576799 kubelet[2308]: 
I0625 16:24:35.576646 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-log-dir\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576799 kubelet[2308]: I0625 16:24:35.576667 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-lib-modules\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576799 kubelet[2308]: I0625 16:24:35.576684 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-xtables-lock\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576946 kubelet[2308]: I0625 16:24:35.576698 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-net-dir\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576946 kubelet[2308]: I0625 16:24:35.576723 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c46319b9-0d24-471c-be81-5dfbc39dfc7e-node-certs\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576946 kubelet[2308]: I0625 16:24:35.576747 2308 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-policysync\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.576946 kubelet[2308]: I0625 16:24:35.576760 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-lib-calico\") pod \"calico-node-p6p47\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " pod="calico-system/calico-node-p6p47" Jun 25 16:24:35.637486 kubelet[2308]: I0625 16:24:35.637441 2308 topology_manager.go:215] "Topology Admit Handler" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" podNamespace="calico-system" podName="csi-node-driver-bw7kv" Jun 25 16:24:35.638040 kubelet[2308]: E0625 16:24:35.638017 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:35.677425 kubelet[2308]: I0625 16:24:35.677365 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66r5q\" (UniqueName: \"kubernetes.io/projected/a82ce7d0-b43c-4d81-ae9f-10974dd66ff7-kube-api-access-66r5q\") pod \"csi-node-driver-bw7kv\" (UID: \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\") " pod="calico-system/csi-node-driver-bw7kv" Jun 25 16:24:35.677425 kubelet[2308]: I0625 16:24:35.677426 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a82ce7d0-b43c-4d81-ae9f-10974dd66ff7-registration-dir\") pod \"csi-node-driver-bw7kv\" 
(UID: \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\") " pod="calico-system/csi-node-driver-bw7kv" Jun 25 16:24:35.677723 kubelet[2308]: I0625 16:24:35.677507 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a82ce7d0-b43c-4d81-ae9f-10974dd66ff7-varrun\") pod \"csi-node-driver-bw7kv\" (UID: \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\") " pod="calico-system/csi-node-driver-bw7kv" Jun 25 16:24:35.677723 kubelet[2308]: I0625 16:24:35.677528 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a82ce7d0-b43c-4d81-ae9f-10974dd66ff7-kubelet-dir\") pod \"csi-node-driver-bw7kv\" (UID: \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\") " pod="calico-system/csi-node-driver-bw7kv" Jun 25 16:24:35.677723 kubelet[2308]: I0625 16:24:35.677549 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a82ce7d0-b43c-4d81-ae9f-10974dd66ff7-socket-dir\") pod \"csi-node-driver-bw7kv\" (UID: \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\") " pod="calico-system/csi-node-driver-bw7kv" Jun 25 16:24:35.681831 kubelet[2308]: E0625 16:24:35.681769 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.681831 kubelet[2308]: W0625 16:24:35.681797 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.681831 kubelet[2308]: E0625 16:24:35.681819 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.691683 kubelet[2308]: E0625 16:24:35.691636 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.691683 kubelet[2308]: W0625 16:24:35.691683 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.691683 kubelet[2308]: E0625 16:24:35.691699 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.734041 kubelet[2308]: E0625 16:24:35.733997 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:35.734727 containerd[1293]: time="2024-06-25T16:24:35.734667525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549dcf549-q7bdw,Uid:95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290,Namespace:calico-system,Attempt:0,}" Jun 25 16:24:35.765228 containerd[1293]: time="2024-06-25T16:24:35.763759538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:35.765228 containerd[1293]: time="2024-06-25T16:24:35.763851882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:35.765228 containerd[1293]: time="2024-06-25T16:24:35.763885846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:35.765228 containerd[1293]: time="2024-06-25T16:24:35.763898540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:35.779190 kubelet[2308]: E0625 16:24:35.779149 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.779190 kubelet[2308]: W0625 16:24:35.779169 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.779190 kubelet[2308]: E0625 16:24:35.779188 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.779432 kubelet[2308]: E0625 16:24:35.779414 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.779432 kubelet[2308]: W0625 16:24:35.779427 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.779501 kubelet[2308]: E0625 16:24:35.779454 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.779658 kubelet[2308]: E0625 16:24:35.779640 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.779658 kubelet[2308]: W0625 16:24:35.779652 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.779658 kubelet[2308]: E0625 16:24:35.779660 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.779950 kubelet[2308]: E0625 16:24:35.779923 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.779950 kubelet[2308]: W0625 16:24:35.779937 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.779950 kubelet[2308]: E0625 16:24:35.779945 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.780248 kubelet[2308]: E0625 16:24:35.780226 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.780248 kubelet[2308]: W0625 16:24:35.780241 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.780337 kubelet[2308]: E0625 16:24:35.780253 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.780864 kubelet[2308]: E0625 16:24:35.780840 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.780864 kubelet[2308]: W0625 16:24:35.780851 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.780864 kubelet[2308]: E0625 16:24:35.780859 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.781074 kubelet[2308]: E0625 16:24:35.781042 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.781123 kubelet[2308]: W0625 16:24:35.781081 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.781184 kubelet[2308]: E0625 16:24:35.781158 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.781259 kubelet[2308]: E0625 16:24:35.781241 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.781259 kubelet[2308]: W0625 16:24:35.781253 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.781344 kubelet[2308]: E0625 16:24:35.781315 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.781464 kubelet[2308]: E0625 16:24:35.781428 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.781464 kubelet[2308]: W0625 16:24:35.781438 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.781540 kubelet[2308]: E0625 16:24:35.781517 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.781602 kubelet[2308]: E0625 16:24:35.781585 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.781602 kubelet[2308]: W0625 16:24:35.781598 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.781681 kubelet[2308]: E0625 16:24:35.781675 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.781757 kubelet[2308]: E0625 16:24:35.781739 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.781757 kubelet[2308]: W0625 16:24:35.781751 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.781857 kubelet[2308]: E0625 16:24:35.781850 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.781954 kubelet[2308]: E0625 16:24:35.781935 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.781954 kubelet[2308]: W0625 16:24:35.781947 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.781954 kubelet[2308]: E0625 16:24:35.781957 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.782353 kubelet[2308]: E0625 16:24:35.782327 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.782353 kubelet[2308]: W0625 16:24:35.782342 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.782441 kubelet[2308]: E0625 16:24:35.782357 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.782637 kubelet[2308]: E0625 16:24:35.782597 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.782637 kubelet[2308]: W0625 16:24:35.782611 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.782731 kubelet[2308]: E0625 16:24:35.782641 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.783062 systemd[1]: Started cri-containerd-912358f5ca1c833e6b26f03f4668fc390b044b4c8cd23b8ae6191f4d2ace5b3e.scope - libcontainer container 912358f5ca1c833e6b26f03f4668fc390b044b4c8cd23b8ae6191f4d2ace5b3e. 
Jun 25 16:24:35.783432 kubelet[2308]: E0625 16:24:35.783417 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.783432 kubelet[2308]: W0625 16:24:35.783429 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.783522 kubelet[2308]: E0625 16:24:35.783463 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.784092 kubelet[2308]: E0625 16:24:35.784064 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.784187 kubelet[2308]: W0625 16:24:35.784168 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.784363 kubelet[2308]: E0625 16:24:35.784350 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.784673 kubelet[2308]: E0625 16:24:35.784651 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.784757 kubelet[2308]: W0625 16:24:35.784745 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.784913 kubelet[2308]: E0625 16:24:35.784901 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.785752 kubelet[2308]: E0625 16:24:35.785727 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.785908 kubelet[2308]: W0625 16:24:35.785895 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.786110 kubelet[2308]: E0625 16:24:35.786084 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.787941 kubelet[2308]: E0625 16:24:35.787923 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.788074 kubelet[2308]: W0625 16:24:35.788063 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.788206 kubelet[2308]: E0625 16:24:35.788037 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:35.788448 kubelet[2308]: E0625 16:24:35.788431 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.788918 kubelet[2308]: E0625 16:24:35.788902 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.788988 kubelet[2308]: W0625 16:24:35.788979 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.789086 kubelet[2308]: E0625 16:24:35.789077 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.789323 kubelet[2308]: E0625 16:24:35.789314 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.789388 kubelet[2308]: W0625 16:24:35.789379 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.789519 kubelet[2308]: E0625 16:24:35.789509 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.789635 kubelet[2308]: E0625 16:24:35.789628 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.789699 kubelet[2308]: W0625 16:24:35.789691 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.789820 kubelet[2308]: E0625 16:24:35.789810 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.789867 containerd[1293]: time="2024-06-25T16:24:35.789817627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p6p47,Uid:c46319b9-0d24-471c-be81-5dfbc39dfc7e,Namespace:calico-system,Attempt:0,}" Jun 25 16:24:35.790116 kubelet[2308]: E0625 16:24:35.790106 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.790257 kubelet[2308]: W0625 16:24:35.790247 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.790344 kubelet[2308]: E0625 16:24:35.790334 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.790572 kubelet[2308]: E0625 16:24:35.790563 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.790640 kubelet[2308]: W0625 16:24:35.790631 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.790704 kubelet[2308]: E0625 16:24:35.790695 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.791002 kubelet[2308]: E0625 16:24:35.790975 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.791081 kubelet[2308]: W0625 16:24:35.791071 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.791141 kubelet[2308]: E0625 16:24:35.791131 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:35.795045 kubelet[2308]: E0625 16:24:35.795000 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:35.795045 kubelet[2308]: W0625 16:24:35.795021 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:35.795045 kubelet[2308]: E0625 16:24:35.795038 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:35.797000 audit: BPF prog-id=114 op=LOAD Jun 25 16:24:35.798000 audit: BPF prog-id=115 op=LOAD Jun 25 16:24:35.798000 audit[2732]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2721 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931323335386635636131633833336536623236663033663436363866 Jun 25 16:24:35.798000 audit: BPF prog-id=116 op=LOAD Jun 25 16:24:35.798000 audit[2732]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2721 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.798000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931323335386635636131633833336536623236663033663436363866 Jun 25 16:24:35.798000 audit: BPF prog-id=116 op=UNLOAD Jun 25 16:24:35.798000 audit: BPF prog-id=115 op=UNLOAD Jun 25 16:24:35.798000 audit: BPF prog-id=117 op=LOAD Jun 25 16:24:35.798000 audit[2732]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2721 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931323335386635636131633833336536623236663033663436363866 Jun 25 16:24:35.827365 containerd[1293]: time="2024-06-25T16:24:35.827239240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549dcf549-q7bdw,Uid:95b5b5f0-42ff-4a1d-b22c-2e9cfb7fa290,Namespace:calico-system,Attempt:0,} returns sandbox id \"912358f5ca1c833e6b26f03f4668fc390b044b4c8cd23b8ae6191f4d2ace5b3e\"" Jun 25 16:24:35.828057 kubelet[2308]: E0625 16:24:35.828035 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:35.831819 containerd[1293]: time="2024-06-25T16:24:35.831745964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:24:35.843410 containerd[1293]: time="2024-06-25T16:24:35.843300256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:35.843410 containerd[1293]: time="2024-06-25T16:24:35.843378162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:35.843410 containerd[1293]: time="2024-06-25T16:24:35.843405543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:35.843605 containerd[1293]: time="2024-06-25T16:24:35.843427043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:35.867988 systemd[1]: Started cri-containerd-93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f.scope - libcontainer container 93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f. Jun 25 16:24:35.874000 audit: BPF prog-id=118 op=LOAD Jun 25 16:24:35.874000 audit: BPF prog-id=119 op=LOAD Jun 25 16:24:35.874000 audit[2800]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2790 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663533353234373739303836383965633264303530363233333865 Jun 25 16:24:35.874000 audit: BPF prog-id=120 op=LOAD Jun 25 16:24:35.874000 audit[2800]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2790 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663533353234373739303836383965633264303530363233333865 Jun 25 16:24:35.874000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:24:35.874000 audit: BPF prog-id=119 op=UNLOAD Jun 25 16:24:35.874000 audit: BPF prog-id=121 op=LOAD Jun 25 16:24:35.874000 audit[2800]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2790 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:35.874000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933663533353234373739303836383965633264303530363233333865 Jun 25 16:24:35.885581 containerd[1293]: time="2024-06-25T16:24:35.885518607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p6p47,Uid:c46319b9-0d24-471c-be81-5dfbc39dfc7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\"" Jun 25 16:24:35.886327 kubelet[2308]: E0625 16:24:35.886295 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:36.313000 audit[2825]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:36.313000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe3a864b70 a2=0 
a3=7ffe3a864b5c items=0 ppid=2514 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:36.313000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:36.314000 audit[2825]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:24:36.314000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3a864b70 a2=0 a3=0 items=0 ppid=2514 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:36.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:24:37.734117 kubelet[2308]: E0625 16:24:37.734073 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:38.051590 containerd[1293]: time="2024-06-25T16:24:38.051418290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.093846 containerd[1293]: time="2024-06-25T16:24:38.093717253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:24:38.127864 containerd[1293]: time="2024-06-25T16:24:38.127797903Z" level=info msg="ImageCreate event 
name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.170403 containerd[1293]: time="2024-06-25T16:24:38.170272996Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.183136 containerd[1293]: time="2024-06-25T16:24:38.183061456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.183873 containerd[1293]: time="2024-06-25T16:24:38.183819481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.351990451s" Jun 25 16:24:38.183930 containerd[1293]: time="2024-06-25T16:24:38.183872961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:24:38.184944 containerd[1293]: time="2024-06-25T16:24:38.184922584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:24:38.192375 containerd[1293]: time="2024-06-25T16:24:38.190838680Z" level=info msg="CreateContainer within sandbox \"912358f5ca1c833e6b26f03f4668fc390b044b4c8cd23b8ae6191f4d2ace5b3e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:24:38.364765 containerd[1293]: time="2024-06-25T16:24:38.364672063Z" level=info msg="CreateContainer within sandbox \"912358f5ca1c833e6b26f03f4668fc390b044b4c8cd23b8ae6191f4d2ace5b3e\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4bf85133a2f7d0618fee544541b00eeef58eae10f848afdb5c50104bd8f9c3a9\"" Jun 25 16:24:38.365296 containerd[1293]: time="2024-06-25T16:24:38.365248006Z" level=info msg="StartContainer for \"4bf85133a2f7d0618fee544541b00eeef58eae10f848afdb5c50104bd8f9c3a9\"" Jun 25 16:24:38.393024 systemd[1]: Started cri-containerd-4bf85133a2f7d0618fee544541b00eeef58eae10f848afdb5c50104bd8f9c3a9.scope - libcontainer container 4bf85133a2f7d0618fee544541b00eeef58eae10f848afdb5c50104bd8f9c3a9. Jun 25 16:24:38.404000 audit: BPF prog-id=122 op=LOAD Jun 25 16:24:38.404000 audit: BPF prog-id=123 op=LOAD Jun 25 16:24:38.404000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2721 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462663835313333613266376430363138666565353434353431623030 Jun 25 16:24:38.404000 audit: BPF prog-id=124 op=LOAD Jun 25 16:24:38.404000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2721 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462663835313333613266376430363138666565353434353431623030 Jun 25 16:24:38.404000 audit: BPF 
prog-id=124 op=UNLOAD Jun 25 16:24:38.405000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:24:38.405000 audit: BPF prog-id=125 op=LOAD Jun 25 16:24:38.405000 audit[2840]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2721 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:38.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462663835313333613266376430363138666565353434353431623030 Jun 25 16:24:38.441001 containerd[1293]: time="2024-06-25T16:24:38.440945616Z" level=info msg="StartContainer for \"4bf85133a2f7d0618fee544541b00eeef58eae10f848afdb5c50104bd8f9c3a9\" returns successfully" Jun 25 16:24:38.875772 kubelet[2308]: E0625 16:24:38.875739 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:38.887988 kubelet[2308]: I0625 16:24:38.887740 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-549dcf549-q7bdw" podStartSLOduration=1.534388482 podStartE2EDuration="3.887720053s" podCreationTimestamp="2024-06-25 16:24:35 +0000 UTC" firstStartedPulling="2024-06-25 16:24:35.831366039 +0000 UTC m=+23.176026063" lastFinishedPulling="2024-06-25 16:24:38.184697601 +0000 UTC m=+25.529357634" observedRunningTime="2024-06-25 16:24:38.886514298 +0000 UTC m=+26.231174331" watchObservedRunningTime="2024-06-25 16:24:38.887720053 +0000 UTC m=+26.232380086" Jun 25 16:24:38.895779 kubelet[2308]: E0625 16:24:38.895719 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jun 25 16:24:38.895779 kubelet[2308]: W0625 16:24:38.895748 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.895779 kubelet[2308]: E0625 16:24:38.895771 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.896051 kubelet[2308]: E0625 16:24:38.895992 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.896051 kubelet[2308]: W0625 16:24:38.896003 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.896051 kubelet[2308]: E0625 16:24:38.896014 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.896239 kubelet[2308]: E0625 16:24:38.896212 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.896239 kubelet[2308]: W0625 16:24:38.896227 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.896239 kubelet[2308]: E0625 16:24:38.896236 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.896524 kubelet[2308]: E0625 16:24:38.896502 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.896524 kubelet[2308]: W0625 16:24:38.896515 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.896524 kubelet[2308]: E0625 16:24:38.896524 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.896730 kubelet[2308]: E0625 16:24:38.896710 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.896730 kubelet[2308]: W0625 16:24:38.896722 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.896791 kubelet[2308]: E0625 16:24:38.896735 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.896954 kubelet[2308]: E0625 16:24:38.896932 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.896954 kubelet[2308]: W0625 16:24:38.896945 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.896954 kubelet[2308]: E0625 16:24:38.896953 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.897179 kubelet[2308]: E0625 16:24:38.897155 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.897179 kubelet[2308]: W0625 16:24:38.897168 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.897179 kubelet[2308]: E0625 16:24:38.897177 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.897386 kubelet[2308]: E0625 16:24:38.897371 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.897386 kubelet[2308]: W0625 16:24:38.897382 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.897453 kubelet[2308]: E0625 16:24:38.897392 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.897550 kubelet[2308]: E0625 16:24:38.897535 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.897550 kubelet[2308]: W0625 16:24:38.897547 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.897614 kubelet[2308]: E0625 16:24:38.897556 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.897703 kubelet[2308]: E0625 16:24:38.897689 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.897703 kubelet[2308]: W0625 16:24:38.897701 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.897769 kubelet[2308]: E0625 16:24:38.897710 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.897873 kubelet[2308]: E0625 16:24:38.897859 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.897873 kubelet[2308]: W0625 16:24:38.897871 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.897938 kubelet[2308]: E0625 16:24:38.897879 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.898064 kubelet[2308]: E0625 16:24:38.898049 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.898064 kubelet[2308]: W0625 16:24:38.898062 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.898132 kubelet[2308]: E0625 16:24:38.898070 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.898247 kubelet[2308]: E0625 16:24:38.898232 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.898247 kubelet[2308]: W0625 16:24:38.898244 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.898449 kubelet[2308]: E0625 16:24:38.898253 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.898500 kubelet[2308]: E0625 16:24:38.898469 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.898544 kubelet[2308]: W0625 16:24:38.898501 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.898544 kubelet[2308]: E0625 16:24:38.898530 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.898767 kubelet[2308]: E0625 16:24:38.898753 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.898767 kubelet[2308]: W0625 16:24:38.898764 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.898850 kubelet[2308]: E0625 16:24:38.898774 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.901156 kubelet[2308]: E0625 16:24:38.901125 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.901156 kubelet[2308]: W0625 16:24:38.901142 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.901156 kubelet[2308]: E0625 16:24:38.901153 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.901388 kubelet[2308]: E0625 16:24:38.901364 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.901388 kubelet[2308]: W0625 16:24:38.901377 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.901473 kubelet[2308]: E0625 16:24:38.901392 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.901597 kubelet[2308]: E0625 16:24:38.901573 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.901597 kubelet[2308]: W0625 16:24:38.901589 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.901597 kubelet[2308]: E0625 16:24:38.901605 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.901952 kubelet[2308]: E0625 16:24:38.901914 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.901952 kubelet[2308]: W0625 16:24:38.901947 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.902036 kubelet[2308]: E0625 16:24:38.901982 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.902549 kubelet[2308]: E0625 16:24:38.902509 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.902549 kubelet[2308]: W0625 16:24:38.902526 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.902549 kubelet[2308]: E0625 16:24:38.902542 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.902741 kubelet[2308]: E0625 16:24:38.902719 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.902741 kubelet[2308]: W0625 16:24:38.902729 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.902811 kubelet[2308]: E0625 16:24:38.902763 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.902950 kubelet[2308]: E0625 16:24:38.902907 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.902950 kubelet[2308]: W0625 16:24:38.902924 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.903037 kubelet[2308]: E0625 16:24:38.902968 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.903230 kubelet[2308]: E0625 16:24:38.903135 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.903230 kubelet[2308]: W0625 16:24:38.903151 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.903230 kubelet[2308]: E0625 16:24:38.903188 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.903484 kubelet[2308]: E0625 16:24:38.903412 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.903484 kubelet[2308]: W0625 16:24:38.903422 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.903484 kubelet[2308]: E0625 16:24:38.903436 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.903734 kubelet[2308]: E0625 16:24:38.903707 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.903734 kubelet[2308]: W0625 16:24:38.903724 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.903797 kubelet[2308]: E0625 16:24:38.903741 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.903961 kubelet[2308]: E0625 16:24:38.903942 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.903961 kubelet[2308]: W0625 16:24:38.903954 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.904022 kubelet[2308]: E0625 16:24:38.903966 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.904228 kubelet[2308]: E0625 16:24:38.904209 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.904228 kubelet[2308]: W0625 16:24:38.904229 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.904301 kubelet[2308]: E0625 16:24:38.904248 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.904446 kubelet[2308]: E0625 16:24:38.904431 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.904446 kubelet[2308]: W0625 16:24:38.904446 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.904500 kubelet[2308]: E0625 16:24:38.904456 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.904706 kubelet[2308]: E0625 16:24:38.904687 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.904766 kubelet[2308]: W0625 16:24:38.904705 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.904766 kubelet[2308]: E0625 16:24:38.904721 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.905042 kubelet[2308]: E0625 16:24:38.905022 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.905042 kubelet[2308]: W0625 16:24:38.905037 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.905155 kubelet[2308]: E0625 16:24:38.905051 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.905272 kubelet[2308]: E0625 16:24:38.905258 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.905272 kubelet[2308]: W0625 16:24:38.905270 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.905342 kubelet[2308]: E0625 16:24:38.905283 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:38.905721 kubelet[2308]: E0625 16:24:38.905708 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.905721 kubelet[2308]: W0625 16:24:38.905719 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.905795 kubelet[2308]: E0625 16:24:38.905730 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:38.906081 kubelet[2308]: E0625 16:24:38.906044 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:38.906081 kubelet[2308]: W0625 16:24:38.906063 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:38.906081 kubelet[2308]: E0625 16:24:38.906075 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.734248 kubelet[2308]: E0625 16:24:39.734196 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:39.877188 kubelet[2308]: I0625 16:24:39.877132 2308 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:24:39.878254 kubelet[2308]: E0625 16:24:39.877767 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:39.905812 kubelet[2308]: E0625 16:24:39.905720 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.905812 kubelet[2308]: W0625 16:24:39.905759 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.905812 kubelet[2308]: E0625 16:24:39.905784 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.906117 kubelet[2308]: E0625 16:24:39.906093 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.906152 kubelet[2308]: W0625 16:24:39.906118 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.906206 kubelet[2308]: E0625 16:24:39.906147 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.906634 kubelet[2308]: E0625 16:24:39.906608 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.906634 kubelet[2308]: W0625 16:24:39.906621 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.906715 kubelet[2308]: E0625 16:24:39.906636 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.906867 kubelet[2308]: E0625 16:24:39.906852 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.906867 kubelet[2308]: W0625 16:24:39.906863 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.906955 kubelet[2308]: E0625 16:24:39.906874 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.907096 kubelet[2308]: E0625 16:24:39.907075 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.907096 kubelet[2308]: W0625 16:24:39.907087 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.907096 kubelet[2308]: E0625 16:24:39.907096 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.907363 kubelet[2308]: E0625 16:24:39.907336 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.907363 kubelet[2308]: W0625 16:24:39.907352 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.907446 kubelet[2308]: E0625 16:24:39.907364 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.907567 kubelet[2308]: E0625 16:24:39.907551 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.907567 kubelet[2308]: W0625 16:24:39.907562 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.907567 kubelet[2308]: E0625 16:24:39.907571 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.907758 kubelet[2308]: E0625 16:24:39.907742 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.907758 kubelet[2308]: W0625 16:24:39.907754 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.907848 kubelet[2308]: E0625 16:24:39.907763 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.908157 kubelet[2308]: E0625 16:24:39.907992 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.908157 kubelet[2308]: W0625 16:24:39.908006 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.908157 kubelet[2308]: E0625 16:24:39.908016 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.908278 kubelet[2308]: E0625 16:24:39.908196 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.908278 kubelet[2308]: W0625 16:24:39.908205 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.908278 kubelet[2308]: E0625 16:24:39.908216 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.908401 kubelet[2308]: E0625 16:24:39.908387 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.908401 kubelet[2308]: W0625 16:24:39.908399 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.908469 kubelet[2308]: E0625 16:24:39.908408 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.908656 kubelet[2308]: E0625 16:24:39.908578 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.908656 kubelet[2308]: W0625 16:24:39.908591 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.908656 kubelet[2308]: E0625 16:24:39.908599 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.908858 kubelet[2308]: E0625 16:24:39.908842 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.908858 kubelet[2308]: W0625 16:24:39.908855 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.908932 kubelet[2308]: E0625 16:24:39.908865 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.909070 kubelet[2308]: E0625 16:24:39.909039 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.909070 kubelet[2308]: W0625 16:24:39.909052 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.909070 kubelet[2308]: E0625 16:24:39.909060 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.909254 kubelet[2308]: E0625 16:24:39.909238 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.909254 kubelet[2308]: W0625 16:24:39.909250 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.909322 kubelet[2308]: E0625 16:24:39.909259 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.909519 kubelet[2308]: E0625 16:24:39.909495 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.909519 kubelet[2308]: W0625 16:24:39.909507 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.909519 kubelet[2308]: E0625 16:24:39.909516 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.909809 kubelet[2308]: E0625 16:24:39.909791 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.909809 kubelet[2308]: W0625 16:24:39.909803 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.909913 kubelet[2308]: E0625 16:24:39.909818 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.910073 kubelet[2308]: E0625 16:24:39.910056 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.910073 kubelet[2308]: W0625 16:24:39.910071 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.910150 kubelet[2308]: E0625 16:24:39.910090 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.910315 kubelet[2308]: E0625 16:24:39.910302 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.910315 kubelet[2308]: W0625 16:24:39.910313 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.910384 kubelet[2308]: E0625 16:24:39.910328 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.910520 kubelet[2308]: E0625 16:24:39.910506 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.910520 kubelet[2308]: W0625 16:24:39.910517 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.910599 kubelet[2308]: E0625 16:24:39.910532 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.910837 kubelet[2308]: E0625 16:24:39.910788 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.910883 kubelet[2308]: W0625 16:24:39.910840 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.910912 kubelet[2308]: E0625 16:24:39.910879 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.911169 kubelet[2308]: E0625 16:24:39.911152 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.911169 kubelet[2308]: W0625 16:24:39.911168 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.911292 kubelet[2308]: E0625 16:24:39.911201 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.911482 kubelet[2308]: E0625 16:24:39.911464 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.911482 kubelet[2308]: W0625 16:24:39.911479 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.911545 kubelet[2308]: E0625 16:24:39.911499 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.911708 kubelet[2308]: E0625 16:24:39.911693 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.911708 kubelet[2308]: W0625 16:24:39.911705 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.911783 kubelet[2308]: E0625 16:24:39.911721 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.911954 kubelet[2308]: E0625 16:24:39.911939 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.911954 kubelet[2308]: W0625 16:24:39.911951 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.912021 kubelet[2308]: E0625 16:24:39.911966 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.912242 kubelet[2308]: E0625 16:24:39.912214 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.912242 kubelet[2308]: W0625 16:24:39.912229 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.912242 kubelet[2308]: E0625 16:24:39.912250 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.912604 kubelet[2308]: E0625 16:24:39.912588 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.912604 kubelet[2308]: W0625 16:24:39.912600 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.912681 kubelet[2308]: E0625 16:24:39.912635 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.912777 kubelet[2308]: E0625 16:24:39.912760 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.912777 kubelet[2308]: W0625 16:24:39.912773 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.912902 kubelet[2308]: E0625 16:24:39.912802 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.913012 kubelet[2308]: E0625 16:24:39.912995 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.913012 kubelet[2308]: W0625 16:24:39.913008 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.913092 kubelet[2308]: E0625 16:24:39.913023 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.913236 kubelet[2308]: E0625 16:24:39.913218 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.913236 kubelet[2308]: W0625 16:24:39.913230 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.913330 kubelet[2308]: E0625 16:24:39.913239 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.913413 kubelet[2308]: E0625 16:24:39.913395 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.913413 kubelet[2308]: W0625 16:24:39.913407 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.913491 kubelet[2308]: E0625 16:24:39.913416 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.913597 kubelet[2308]: E0625 16:24:39.913580 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.913597 kubelet[2308]: W0625 16:24:39.913593 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.913686 kubelet[2308]: E0625 16:24:39.913602 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:24:39.913970 kubelet[2308]: E0625 16:24:39.913950 2308 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:24:39.913970 kubelet[2308]: W0625 16:24:39.913963 2308 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:24:39.914053 kubelet[2308]: E0625 16:24:39.913973 2308 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:24:39.974973 containerd[1293]: time="2024-06-25T16:24:39.974701601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:39.984415 containerd[1293]: time="2024-06-25T16:24:39.984231455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:24:40.009153 containerd[1293]: time="2024-06-25T16:24:40.009087502Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:40.018091 containerd[1293]: time="2024-06-25T16:24:40.018026775Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:40.046435 containerd[1293]: time="2024-06-25T16:24:40.046318652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:40.047391 containerd[1293]: time="2024-06-25T16:24:40.047319943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.862364227s" Jun 25 16:24:40.047391 containerd[1293]: time="2024-06-25T16:24:40.047378863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" 
Jun 25 16:24:40.049499 containerd[1293]: time="2024-06-25T16:24:40.049452469Z" level=info msg="CreateContainer within sandbox \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:24:40.373989 containerd[1293]: time="2024-06-25T16:24:40.373921733Z" level=info msg="CreateContainer within sandbox \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4\"" Jun 25 16:24:40.374648 containerd[1293]: time="2024-06-25T16:24:40.374611679Z" level=info msg="StartContainer for \"f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4\"" Jun 25 16:24:40.405104 systemd[1]: Started cri-containerd-f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4.scope - libcontainer container f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4. 
Jun 25 16:24:40.417000 audit: BPF prog-id=126 op=LOAD Jun 25 16:24:40.420501 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:24:40.420558 kernel: audit: type=1334 audit(1719332680.417:470): prog-id=126 op=LOAD Jun 25 16:24:40.417000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2790 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:40.431790 kernel: audit: type=1300 audit(1719332680.417:470): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2790 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:40.431956 kernel: audit: type=1327 audit(1719332680.417:470): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633336362373334393735393037623234653262623939373237633736 Jun 25 16:24:40.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633336362373334393735393037623234653262623939373237633736 Jun 25 16:24:40.418000 audit: BPF prog-id=127 op=LOAD Jun 25 16:24:40.418000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2790 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:40.438044 kernel: audit: type=1334 
audit(1719332680.418:471): prog-id=127 op=LOAD Jun 25 16:24:40.438125 kernel: audit: type=1300 audit(1719332680.418:471): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2790 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:40.438201 kernel: audit: type=1327 audit(1719332680.418:471): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633336362373334393735393037623234653262623939373237633736 Jun 25 16:24:40.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633336362373334393735393037623234653262623939373237633736 Jun 25 16:24:40.442924 kernel: audit: type=1334 audit(1719332680.418:472): prog-id=127 op=UNLOAD Jun 25 16:24:40.418000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:24:40.418000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:24:40.445502 kernel: audit: type=1334 audit(1719332680.418:473): prog-id=126 op=UNLOAD Jun 25 16:24:40.446975 kernel: audit: type=1334 audit(1719332680.418:474): prog-id=128 op=LOAD Jun 25 16:24:40.452309 kernel: audit: type=1300 audit(1719332680.418:474): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2790 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:40.418000 audit: BPF prog-id=128 op=LOAD Jun 25 16:24:40.418000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2790 
pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:40.452549 containerd[1293]: time="2024-06-25T16:24:40.450172592Z" level=info msg="StartContainer for \"f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4\" returns successfully" Jun 25 16:24:40.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633336362373334393735393037623234653262623939373237633736 Jun 25 16:24:40.460034 systemd[1]: cri-containerd-f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4.scope: Deactivated successfully. Jun 25 16:24:40.464000 audit: BPF prog-id=128 op=UNLOAD Jun 25 16:24:40.495448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4-rootfs.mount: Deactivated successfully. 
Jun 25 16:24:40.722805 containerd[1293]: time="2024-06-25T16:24:40.722637204Z" level=info msg="shim disconnected" id=f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4 namespace=k8s.io Jun 25 16:24:40.722805 containerd[1293]: time="2024-06-25T16:24:40.722710382Z" level=warning msg="cleaning up after shim disconnected" id=f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4 namespace=k8s.io Jun 25 16:24:40.722805 containerd[1293]: time="2024-06-25T16:24:40.722719719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:24:40.880199 containerd[1293]: time="2024-06-25T16:24:40.880139967Z" level=info msg="StopPodSandbox for \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\"" Jun 25 16:24:40.880388 containerd[1293]: time="2024-06-25T16:24:40.880222082Z" level=info msg="Container to stop \"f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:24:40.882480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f-shm.mount: Deactivated successfully. Jun 25 16:24:40.887440 systemd[1]: cri-containerd-93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f.scope: Deactivated successfully. 
Jun 25 16:24:40.885000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:24:40.890000 audit: BPF prog-id=121 op=UNLOAD Jun 25 16:24:40.915486 containerd[1293]: time="2024-06-25T16:24:40.915396487Z" level=info msg="shim disconnected" id=93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f namespace=k8s.io Jun 25 16:24:40.915486 containerd[1293]: time="2024-06-25T16:24:40.915467982Z" level=warning msg="cleaning up after shim disconnected" id=93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f namespace=k8s.io Jun 25 16:24:40.915486 containerd[1293]: time="2024-06-25T16:24:40.915478822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:24:40.930276 containerd[1293]: time="2024-06-25T16:24:40.930185399Z" level=warning msg="cleanup warnings time=\"2024-06-25T16:24:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 16:24:40.931947 containerd[1293]: time="2024-06-25T16:24:40.931457799Z" level=info msg="TearDown network for sandbox \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" successfully" Jun 25 16:24:40.931947 containerd[1293]: time="2024-06-25T16:24:40.931488557Z" level=info msg="StopPodSandbox for \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" returns successfully" Jun 25 16:24:41.034424 kubelet[2308]: I0625 16:24:41.034112 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq2gz\" (UniqueName: \"kubernetes.io/projected/c46319b9-0d24-471c-be81-5dfbc39dfc7e-kube-api-access-pq2gz\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.034424 kubelet[2308]: I0625 16:24:41.034185 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-policysync\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.034424 kubelet[2308]: I0625 16:24:41.034216 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-flexvol-driver-host\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.034424 kubelet[2308]: I0625 16:24:41.034239 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-xtables-lock\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.034424 kubelet[2308]: I0625 16:24:41.034262 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-bin-dir\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.034424 kubelet[2308]: I0625 16:24:41.034289 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c46319b9-0d24-471c-be81-5dfbc39dfc7e-node-certs\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.035040 kubelet[2308]: I0625 16:24:41.034312 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-log-dir\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.035040 kubelet[2308]: I0625 16:24:41.034330 2308 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-lib-modules\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.035040 kubelet[2308]: I0625 16:24:41.034346 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-lib-calico\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.035040 kubelet[2308]: I0625 16:24:41.034372 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46319b9-0d24-471c-be81-5dfbc39dfc7e-tigera-ca-bundle\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.035040 kubelet[2308]: I0625 16:24:41.034390 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-net-dir\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.035040 kubelet[2308]: I0625 16:24:41.034417 2308 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-run-calico\") pod \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\" (UID: \"c46319b9-0d24-471c-be81-5dfbc39dfc7e\") " Jun 25 16:24:41.035234 kubelet[2308]: I0625 16:24:41.034511 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: 
"c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035234 kubelet[2308]: I0625 16:24:41.034560 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-policysync" (OuterVolumeSpecName: "policysync") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035234 kubelet[2308]: I0625 16:24:41.034580 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035234 kubelet[2308]: I0625 16:24:41.034578 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035234 kubelet[2308]: I0625 16:24:41.034596 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035478 kubelet[2308]: I0625 16:24:41.034640 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035478 kubelet[2308]: I0625 16:24:41.034656 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035478 kubelet[2308]: I0625 16:24:41.034669 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.035478 kubelet[2308]: I0625 16:24:41.034993 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46319b9-0d24-471c-be81-5dfbc39dfc7e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:24:41.035478 kubelet[2308]: I0625 16:24:41.035029 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:24:41.037932 kubelet[2308]: I0625 16:24:41.037887 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46319b9-0d24-471c-be81-5dfbc39dfc7e-kube-api-access-pq2gz" (OuterVolumeSpecName: "kube-api-access-pq2gz") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "kube-api-access-pq2gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:24:41.040637 kubelet[2308]: I0625 16:24:41.040448 2308 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46319b9-0d24-471c-be81-5dfbc39dfc7e-node-certs" (OuterVolumeSpecName: "node-certs") pod "c46319b9-0d24-471c-be81-5dfbc39dfc7e" (UID: "c46319b9-0d24-471c-be81-5dfbc39dfc7e"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:24:41.135711 kubelet[2308]: I0625 16:24:41.135618 2308 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.135711 kubelet[2308]: I0625 16:24:41.135672 2308 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pq2gz\" (UniqueName: \"kubernetes.io/projected/c46319b9-0d24-471c-be81-5dfbc39dfc7e-kube-api-access-pq2gz\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.135711 kubelet[2308]: I0625 16:24:41.135699 2308 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-policysync\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.135711 kubelet[2308]: I0625 16:24:41.135714 2308 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.135711 kubelet[2308]: I0625 16:24:41.135723 2308 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.136453 kubelet[2308]: I0625 16:24:41.135732 2308 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.136453 kubelet[2308]: I0625 16:24:41.135742 2308 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c46319b9-0d24-471c-be81-5dfbc39dfc7e-node-certs\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.136453 kubelet[2308]: I0625 
16:24:41.135750 2308 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.136453 kubelet[2308]: I0625 16:24:41.135759 2308 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.136453 kubelet[2308]: I0625 16:24:41.135767 2308 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.136453 kubelet[2308]: I0625 16:24:41.135774 2308 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c46319b9-0d24-471c-be81-5dfbc39dfc7e-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.136453 kubelet[2308]: I0625 16:24:41.135782 2308 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c46319b9-0d24-471c-be81-5dfbc39dfc7e-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:24:41.293689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f-rootfs.mount: Deactivated successfully. Jun 25 16:24:41.293799 systemd[1]: var-lib-kubelet-pods-c46319b9\x2d0d24\x2d471c\x2dbe81\x2d5dfbc39dfc7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpq2gz.mount: Deactivated successfully. Jun 25 16:24:41.293884 systemd[1]: var-lib-kubelet-pods-c46319b9\x2d0d24\x2d471c\x2dbe81\x2d5dfbc39dfc7e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Jun 25 16:24:41.734078 kubelet[2308]: E0625 16:24:41.733987 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:41.885596 kubelet[2308]: I0625 16:24:41.885551 2308 scope.go:117] "RemoveContainer" containerID="f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4" Jun 25 16:24:41.887599 containerd[1293]: time="2024-06-25T16:24:41.887553214Z" level=info msg="RemoveContainer for \"f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4\"" Jun 25 16:24:41.902360 containerd[1293]: time="2024-06-25T16:24:41.901978890Z" level=info msg="RemoveContainer for \"f33cb734975907b24e2bb99727c764c986cbe0f1a7e20366710d1874a19288e4\" returns successfully" Jun 25 16:24:41.906187 systemd[1]: Removed slice kubepods-besteffort-podc46319b9_0d24_471c_be81_5dfbc39dfc7e.slice - libcontainer container kubepods-besteffort-podc46319b9_0d24_471c_be81_5dfbc39dfc7e.slice. Jun 25 16:24:41.949189 kubelet[2308]: I0625 16:24:41.949099 2308 topology_manager.go:215] "Topology Admit Handler" podUID="c6bedbe2-b620-439b-be86-65a2f44516d3" podNamespace="calico-system" podName="calico-node-6sn6p" Jun 25 16:24:41.950747 kubelet[2308]: E0625 16:24:41.950729 2308 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c46319b9-0d24-471c-be81-5dfbc39dfc7e" containerName="flexvol-driver" Jun 25 16:24:41.950864 kubelet[2308]: I0625 16:24:41.950779 2308 memory_manager.go:354] "RemoveStaleState removing state" podUID="c46319b9-0d24-471c-be81-5dfbc39dfc7e" containerName="flexvol-driver" Jun 25 16:24:41.960080 systemd[1]: Created slice kubepods-besteffort-podc6bedbe2_b620_439b_be86_65a2f44516d3.slice - libcontainer container kubepods-besteffort-podc6bedbe2_b620_439b_be86_65a2f44516d3.slice. 
Jun 25 16:24:42.044538 kubelet[2308]: I0625 16:24:42.044385 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-cni-net-dir\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.044538 kubelet[2308]: I0625 16:24:42.044450 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-flexvol-driver-host\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.044538 kubelet[2308]: I0625 16:24:42.044474 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-xtables-lock\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.044538 kubelet[2308]: I0625 16:24:42.044493 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-var-run-calico\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.044538 kubelet[2308]: I0625 16:24:42.044512 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-var-lib-calico\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.045112 kubelet[2308]: I0625 16:24:42.044878 2308 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pvdp\" (UniqueName: \"kubernetes.io/projected/c6bedbe2-b620-439b-be86-65a2f44516d3-kube-api-access-2pvdp\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.045112 kubelet[2308]: I0625 16:24:42.044910 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6bedbe2-b620-439b-be86-65a2f44516d3-tigera-ca-bundle\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.045112 kubelet[2308]: I0625 16:24:42.044928 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-cni-log-dir\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.045112 kubelet[2308]: I0625 16:24:42.044949 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-cni-bin-dir\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.045112 kubelet[2308]: I0625 16:24:42.044968 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-lib-modules\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.045313 kubelet[2308]: I0625 16:24:42.044986 2308 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c6bedbe2-b620-439b-be86-65a2f44516d3-policysync\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.045313 kubelet[2308]: I0625 16:24:42.045068 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c6bedbe2-b620-439b-be86-65a2f44516d3-node-certs\") pod \"calico-node-6sn6p\" (UID: \"c6bedbe2-b620-439b-be86-65a2f44516d3\") " pod="calico-system/calico-node-6sn6p" Jun 25 16:24:42.263668 kubelet[2308]: E0625 16:24:42.263514 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:42.264265 containerd[1293]: time="2024-06-25T16:24:42.264203852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6sn6p,Uid:c6bedbe2-b620-439b-be86-65a2f44516d3,Namespace:calico-system,Attempt:0,}" Jun 25 16:24:42.414967 containerd[1293]: time="2024-06-25T16:24:42.414888937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:42.414967 containerd[1293]: time="2024-06-25T16:24:42.414940042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:42.414967 containerd[1293]: time="2024-06-25T16:24:42.414962705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:42.414967 containerd[1293]: time="2024-06-25T16:24:42.414975719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:42.437097 systemd[1]: Started cri-containerd-b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f.scope - libcontainer container b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f. Jun 25 16:24:42.444000 audit: BPF prog-id=129 op=LOAD Jun 25 16:24:42.445000 audit: BPF prog-id=130 op=LOAD Jun 25 16:24:42.445000 audit[3059]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3048 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231393338303763323733353539333633336563303064623138303831 Jun 25 16:24:42.445000 audit: BPF prog-id=131 op=LOAD Jun 25 16:24:42.445000 audit[3059]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3048 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231393338303763323733353539333633336563303064623138303831 Jun 25 16:24:42.445000 audit: BPF prog-id=131 op=UNLOAD Jun 25 16:24:42.445000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:24:42.445000 audit: BPF prog-id=132 op=LOAD Jun 25 16:24:42.445000 audit[3059]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 
ppid=3048 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231393338303763323733353539333633336563303064623138303831 Jun 25 16:24:42.457665 containerd[1293]: time="2024-06-25T16:24:42.457612814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6sn6p,Uid:c6bedbe2-b620-439b-be86-65a2f44516d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f\"" Jun 25 16:24:42.458107 kubelet[2308]: E0625 16:24:42.458088 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:42.460037 containerd[1293]: time="2024-06-25T16:24:42.460012481Z" level=info msg="CreateContainer within sandbox \"b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:24:42.516123 containerd[1293]: time="2024-06-25T16:24:42.515958341Z" level=info msg="CreateContainer within sandbox \"b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d\"" Jun 25 16:24:42.516811 containerd[1293]: time="2024-06-25T16:24:42.516754325Z" level=info msg="StartContainer for \"47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d\"" Jun 25 16:24:42.552075 systemd[1]: Started cri-containerd-47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d.scope - libcontainer 
container 47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d. Jun 25 16:24:42.569000 audit: BPF prog-id=133 op=LOAD Jun 25 16:24:42.569000 audit[3089]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3048 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613434383662313130326539333231616233316466363235396431 Jun 25 16:24:42.569000 audit: BPF prog-id=134 op=LOAD Jun 25 16:24:42.569000 audit[3089]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3048 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613434383662313130326539333231616233316466363235396431 Jun 25 16:24:42.569000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:24:42.569000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:24:42.569000 audit: BPF prog-id=135 op=LOAD Jun 25 16:24:42.569000 audit[3089]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3048 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:42.569000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613434383662313130326539333231616233316466363235396431 Jun 25 16:24:42.596313 systemd[1]: cri-containerd-47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d.scope: Deactivated successfully. Jun 25 16:24:42.599000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:24:42.811084 containerd[1293]: time="2024-06-25T16:24:42.810346319Z" level=info msg="StartContainer for \"47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d\" returns successfully" Jun 25 16:24:42.811263 kubelet[2308]: I0625 16:24:42.811100 2308 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46319b9-0d24-471c-be81-5dfbc39dfc7e" path="/var/lib/kubelet/pods/c46319b9-0d24-471c-be81-5dfbc39dfc7e/volumes" Jun 25 16:24:42.889562 kubelet[2308]: E0625 16:24:42.889532 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:42.992240 containerd[1293]: time="2024-06-25T16:24:42.992172669Z" level=info msg="shim disconnected" id=47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d namespace=k8s.io Jun 25 16:24:42.992240 containerd[1293]: time="2024-06-25T16:24:42.992232531Z" level=warning msg="cleaning up after shim disconnected" id=47a4486b1102e9321ab31df6259d19f006fe8fb499bd6158df14ee6d3acae86d namespace=k8s.io Jun 25 16:24:42.992240 containerd[1293]: time="2024-06-25T16:24:42.992241087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:24:43.734183 kubelet[2308]: E0625 16:24:43.734134 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:43.892733 kubelet[2308]: E0625 16:24:43.892692 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:43.893599 containerd[1293]: time="2024-06-25T16:24:43.893322059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:24:45.734448 kubelet[2308]: E0625 16:24:45.734386 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:47.386855 kernel: kauditd_printk_skb: 28 callbacks suppressed Jun 25 16:24:47.387018 kernel: audit: type=1130 audit(1719332687.368:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.104:22-10.0.0.1:42368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.104:22-10.0.0.1:42368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.368887 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:42368.service - OpenSSH per-connection server daemon (10.0.0.1:42368). 
Jun 25 16:24:47.422000 audit[3145]: USER_ACCT pid=3145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.423547 sshd[3145]: Accepted publickey for core from 10.0.0.1 port 42368 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:24:47.425026 sshd[3145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:47.423000 audit[3145]: CRED_ACQ pid=3145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.430056 systemd-logind[1278]: New session 8 of user core. Jun 25 16:24:47.430880 kernel: audit: type=1101 audit(1719332687.422:491): pid=3145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.430918 kernel: audit: type=1103 audit(1719332687.423:492): pid=3145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.430940 kernel: audit: type=1006 audit(1719332687.423:493): pid=3145 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 16:24:47.423000 audit[3145]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde0af5d30 a2=3 a3=7f3640181480 items=0 ppid=1 pid=3145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:47.438724 kernel: audit: type=1300 audit(1719332687.423:493): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde0af5d30 a2=3 a3=7f3640181480 items=0 ppid=1 pid=3145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:47.438847 kernel: audit: type=1327 audit(1719332687.423:493): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:47.423000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:47.447164 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:24:47.451000 audit[3145]: USER_START pid=3145 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.457853 kernel: audit: type=1105 audit(1719332687.451:494): pid=3145 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.458016 kernel: audit: type=1103 audit(1719332687.453:495): pid=3147 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.453000 audit[3147]: CRED_ACQ pid=3147 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.562781 sshd[3145]: pam_unix(sshd:session): session closed for 
user core Jun 25 16:24:47.563000 audit[3145]: USER_END pid=3145 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.565466 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:42368.service: Deactivated successfully. Jun 25 16:24:47.566196 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:24:47.566803 systemd-logind[1278]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:24:47.567596 systemd-logind[1278]: Removed session 8. Jun 25 16:24:47.563000 audit[3145]: CRED_DISP pid=3145 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.573636 kernel: audit: type=1106 audit(1719332687.563:496): pid=3145 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.573746 kernel: audit: type=1104 audit(1719332687.563:497): pid=3145 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:47.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.104:22-10.0.0.1:42368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:47.734442 kubelet[2308]: E0625 16:24:47.734293 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:49.813856 kubelet[2308]: E0625 16:24:49.813780 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:51.737863 kubelet[2308]: E0625 16:24:51.736360 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:52.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.104:22-10.0.0.1:42384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:52.577471 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:42384.service - OpenSSH per-connection server daemon (10.0.0.1:42384). Jun 25 16:24:52.610065 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:24:52.610296 kernel: audit: type=1130 audit(1719332692.576:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.104:22-10.0.0.1:42384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:52.994000 audit[3163]: USER_ACCT pid=3163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.995398 sshd[3163]: Accepted publickey for core from 10.0.0.1 port 42384 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:24:52.997103 sshd[3163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:53.011905 kernel: audit: type=1101 audit(1719332692.994:500): pid=3163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.012103 kernel: audit: type=1103 audit(1719332692.994:501): pid=3163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:52.994000 audit[3163]: CRED_ACQ pid=3163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.017272 kernel: audit: type=1006 audit(1719332692.994:502): pid=3163 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 16:24:53.017670 kernel: audit: type=1300 audit(1719332692.994:502): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdfdac720 a2=3 a3=7f9ced97e480 items=0 ppid=1 pid=3163 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:24:52.994000 audit[3163]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdfdac720 a2=3 a3=7f9ced97e480 items=0 ppid=1 pid=3163 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:53.023926 kernel: audit: type=1327 audit(1719332692.994:502): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:52.994000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:53.020993 systemd-logind[1278]: New session 9 of user core. Jun 25 16:24:53.037919 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:24:53.055000 audit[3163]: USER_START pid=3163 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.065714 kernel: audit: type=1105 audit(1719332693.055:503): pid=3163 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.066000 audit[3165]: CRED_ACQ pid=3165 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.071589 kernel: audit: type=1103 audit(1719332693.066:504): pid=3165 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.558309 sshd[3163]: pam_unix(sshd:session): session closed 
for user core Jun 25 16:24:53.563000 audit[3163]: USER_END pid=3163 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.567217 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:42384.service: Deactivated successfully. Jun 25 16:24:53.597404 kernel: audit: type=1106 audit(1719332693.563:505): pid=3163 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.597507 kernel: audit: type=1104 audit(1719332693.563:506): pid=3163 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.563000 audit[3163]: CRED_DISP pid=3163 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:53.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.104:22-10.0.0.1:42384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:53.568136 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:24:53.569098 systemd-logind[1278]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:24:53.571289 systemd-logind[1278]: Removed session 9. 
Jun 25 16:24:53.734602 kubelet[2308]: E0625 16:24:53.733864 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:54.819263 containerd[1293]: time="2024-06-25T16:24:54.814216763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:54.825012 containerd[1293]: time="2024-06-25T16:24:54.819944048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:24:54.827851 containerd[1293]: time="2024-06-25T16:24:54.827317263Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:54.846191 containerd[1293]: time="2024-06-25T16:24:54.846130701Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:54.853181 containerd[1293]: time="2024-06-25T16:24:54.851780872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:54.853515 containerd[1293]: time="2024-06-25T16:24:54.853466955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 10.960086978s" Jun 25 
16:24:54.853608 containerd[1293]: time="2024-06-25T16:24:54.853588883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:24:54.906041 containerd[1293]: time="2024-06-25T16:24:54.904111545Z" level=info msg="CreateContainer within sandbox \"b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:24:55.735224 kubelet[2308]: E0625 16:24:55.734729 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:55.962507 containerd[1293]: time="2024-06-25T16:24:55.961618443Z" level=info msg="CreateContainer within sandbox \"b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac\"" Jun 25 16:24:55.967409 containerd[1293]: time="2024-06-25T16:24:55.964210376Z" level=info msg="StartContainer for \"8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac\"" Jun 25 16:24:56.026609 systemd[1]: run-containerd-runc-k8s.io-8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac-runc.H611nJ.mount: Deactivated successfully. Jun 25 16:24:56.042024 systemd[1]: Started cri-containerd-8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac.scope - libcontainer container 8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac. 
Jun 25 16:24:56.057000 audit: BPF prog-id=136 op=LOAD Jun 25 16:24:56.057000 audit[3192]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=3048 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:56.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865346134396136666338626633323537366466303261633130363335 Jun 25 16:24:56.059000 audit: BPF prog-id=137 op=LOAD Jun 25 16:24:56.059000 audit[3192]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=3048 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:56.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865346134396136666338626633323537366466303261633130363335 Jun 25 16:24:56.059000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:24:56.059000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:24:56.059000 audit: BPF prog-id=138 op=LOAD Jun 25 16:24:56.059000 audit[3192]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=3048 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:56.059000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865346134396136666338626633323537366466303261633130363335 Jun 25 16:24:57.227775 containerd[1293]: time="2024-06-25T16:24:57.227685001Z" level=info msg="StartContainer for \"8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac\" returns successfully" Jun 25 16:24:57.237155 kubelet[2308]: E0625 16:24:57.236001 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:24:57.735208 kubelet[2308]: E0625 16:24:57.735083 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:24:58.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.104:22-10.0.0.1:43544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:58.580161 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:43544.service - OpenSSH per-connection server daemon (10.0.0.1:43544). Jun 25 16:24:58.669002 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 16:24:58.669324 kernel: audit: type=1130 audit(1719332698.579:513): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.104:22-10.0.0.1:43544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:59.326000 audit[3221]: USER_ACCT pid=3221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.339036 sshd[3221]: Accepted publickey for core from 10.0.0.1 port 43544 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:24:59.328000 audit[3221]: CRED_ACQ pid=3221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.365819 sshd[3221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:59.370368 kernel: audit: type=1101 audit(1719332699.326:514): pid=3221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.370515 kernel: audit: type=1103 audit(1719332699.328:515): pid=3221 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.381980 kernel: audit: type=1006 audit(1719332699.328:516): pid=3221 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:24:59.328000 audit[3221]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd47942b0 a2=3 a3=7fecbb69a480 items=0 ppid=1 pid=3221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
16:24:59.328000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:59.415883 kernel: audit: type=1300 audit(1719332699.328:516): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd47942b0 a2=3 a3=7fecbb69a480 items=0 ppid=1 pid=3221 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:59.416038 kernel: audit: type=1327 audit(1719332699.328:516): proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:59.417225 systemd-logind[1278]: New session 10 of user core. Jun 25 16:24:59.425221 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 16:24:59.440000 audit[3221]: USER_START pid=3221 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.451565 kernel: audit: type=1105 audit(1719332699.440:517): pid=3221 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.443000 audit[3225]: CRED_ACQ pid=3225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.476667 kernel: audit: type=1103 audit(1719332699.443:518): pid=3225 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:24:59.734255 kubelet[2308]: E0625 
16:24:59.734005 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:25:00.237233 sshd[3221]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:00.239000 audit[3221]: USER_END pid=3221 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:00.244773 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:43544.service: Deactivated successfully. Jun 25 16:25:00.245637 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:25:00.239000 audit[3221]: CRED_DISP pid=3221 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:00.251919 systemd-logind[1278]: Session 10 logged out. Waiting for processes to exit. 
Jun 25 16:25:00.252231 kernel: audit: type=1106 audit(1719332700.239:519): pid=3221 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:00.252391 kernel: audit: type=1104 audit(1719332700.239:520): pid=3221 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:00.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.104:22-10.0.0.1:43544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:00.254741 systemd-logind[1278]: Removed session 10. Jun 25 16:25:01.265923 containerd[1293]: time="2024-06-25T16:25:01.264708298Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:25:01.281524 systemd[1]: cri-containerd-8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac.scope: Deactivated successfully. Jun 25 16:25:01.301715 kubelet[2308]: I0625 16:25:01.287375 2308 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 16:25:01.301000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:25:01.396188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac-rootfs.mount: Deactivated successfully. 
Jun 25 16:25:01.680795 containerd[1293]: time="2024-06-25T16:25:01.678593871Z" level=info msg="shim disconnected" id=8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac namespace=k8s.io Jun 25 16:25:01.680795 containerd[1293]: time="2024-06-25T16:25:01.678782966Z" level=warning msg="cleaning up after shim disconnected" id=8e4a49a6fc8bf32576df02ac10635a7fe5a43b534cec77246e093555b7aa72ac namespace=k8s.io Jun 25 16:25:01.680795 containerd[1293]: time="2024-06-25T16:25:01.678797553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:25:01.754460 systemd[1]: Created slice kubepods-besteffort-poda82ce7d0_b43c_4d81_ae9f_10974dd66ff7.slice - libcontainer container kubepods-besteffort-poda82ce7d0_b43c_4d81_ae9f_10974dd66ff7.slice. Jun 25 16:25:01.772732 kubelet[2308]: I0625 16:25:01.766085 2308 topology_manager.go:215] "Topology Admit Handler" podUID="5419bba1-6081-4f31-bcc8-616bdda728d4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-27tds" Jun 25 16:25:01.772732 kubelet[2308]: I0625 16:25:01.766231 2308 topology_manager.go:215] "Topology Admit Handler" podUID="6b6ad376-38df-49e4-8a9f-dd64acf97dda" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xmnbk" Jun 25 16:25:01.772732 kubelet[2308]: I0625 16:25:01.771348 2308 topology_manager.go:215] "Topology Admit Handler" podUID="ef011844-7458-4dc5-b4b3-48140b3ba006" podNamespace="calico-system" podName="calico-kube-controllers-678c89559-dn457" Jun 25 16:25:01.772732 kubelet[2308]: I0625 16:25:01.772065 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef011844-7458-4dc5-b4b3-48140b3ba006-tigera-ca-bundle\") pod \"calico-kube-controllers-678c89559-dn457\" (UID: \"ef011844-7458-4dc5-b4b3-48140b3ba006\") " pod="calico-system/calico-kube-controllers-678c89559-dn457" Jun 25 16:25:01.772732 kubelet[2308]: I0625 16:25:01.772100 2308 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5419bba1-6081-4f31-bcc8-616bdda728d4-config-volume\") pod \"coredns-7db6d8ff4d-27tds\" (UID: \"5419bba1-6081-4f31-bcc8-616bdda728d4\") " pod="kube-system/coredns-7db6d8ff4d-27tds" Jun 25 16:25:01.772732 kubelet[2308]: I0625 16:25:01.772135 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b6ad376-38df-49e4-8a9f-dd64acf97dda-config-volume\") pod \"coredns-7db6d8ff4d-xmnbk\" (UID: \"6b6ad376-38df-49e4-8a9f-dd64acf97dda\") " pod="kube-system/coredns-7db6d8ff4d-xmnbk" Jun 25 16:25:01.773085 kubelet[2308]: I0625 16:25:01.772164 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27pvd\" (UniqueName: \"kubernetes.io/projected/ef011844-7458-4dc5-b4b3-48140b3ba006-kube-api-access-27pvd\") pod \"calico-kube-controllers-678c89559-dn457\" (UID: \"ef011844-7458-4dc5-b4b3-48140b3ba006\") " pod="calico-system/calico-kube-controllers-678c89559-dn457" Jun 25 16:25:01.773085 kubelet[2308]: I0625 16:25:01.772199 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jj8d\" (UniqueName: \"kubernetes.io/projected/6b6ad376-38df-49e4-8a9f-dd64acf97dda-kube-api-access-2jj8d\") pod \"coredns-7db6d8ff4d-xmnbk\" (UID: \"6b6ad376-38df-49e4-8a9f-dd64acf97dda\") " pod="kube-system/coredns-7db6d8ff4d-xmnbk" Jun 25 16:25:01.773085 kubelet[2308]: I0625 16:25:01.772237 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7g6l\" (UniqueName: \"kubernetes.io/projected/5419bba1-6081-4f31-bcc8-616bdda728d4-kube-api-access-h7g6l\") pod \"coredns-7db6d8ff4d-27tds\" (UID: \"5419bba1-6081-4f31-bcc8-616bdda728d4\") " pod="kube-system/coredns-7db6d8ff4d-27tds" Jun 25 16:25:01.788165 
containerd[1293]: time="2024-06-25T16:25:01.785624793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bw7kv,Uid:a82ce7d0-b43c-4d81-ae9f-10974dd66ff7,Namespace:calico-system,Attempt:0,}" Jun 25 16:25:01.832096 systemd[1]: Created slice kubepods-burstable-pod6b6ad376_38df_49e4_8a9f_dd64acf97dda.slice - libcontainer container kubepods-burstable-pod6b6ad376_38df_49e4_8a9f_dd64acf97dda.slice. Jun 25 16:25:01.977136 systemd[1]: Created slice kubepods-burstable-pod5419bba1_6081_4f31_bcc8_616bdda728d4.slice - libcontainer container kubepods-burstable-pod5419bba1_6081_4f31_bcc8_616bdda728d4.slice. Jun 25 16:25:02.005476 systemd[1]: Created slice kubepods-besteffort-podef011844_7458_4dc5_b4b3_48140b3ba006.slice - libcontainer container kubepods-besteffort-podef011844_7458_4dc5_b4b3_48140b3ba006.slice. Jun 25 16:25:02.259784 kubelet[2308]: E0625 16:25:02.255923 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:02.259953 containerd[1293]: time="2024-06-25T16:25:02.256788021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xmnbk,Uid:6b6ad376-38df-49e4-8a9f-dd64acf97dda,Namespace:kube-system,Attempt:0,}" Jun 25 16:25:02.272418 kubelet[2308]: E0625 16:25:02.271040 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:02.298681 kubelet[2308]: E0625 16:25:02.294003 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:02.298681 kubelet[2308]: I0625 16:25:02.294868 2308 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:25:02.298681 kubelet[2308]: E0625 16:25:02.295483 2308 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:02.299256 containerd[1293]: time="2024-06-25T16:25:02.299214074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27tds,Uid:5419bba1-6081-4f31-bcc8-616bdda728d4,Namespace:kube-system,Attempt:0,}" Jun 25 16:25:02.315235 containerd[1293]: time="2024-06-25T16:25:02.315188165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:25:02.333697 containerd[1293]: time="2024-06-25T16:25:02.328197956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678c89559-dn457,Uid:ef011844-7458-4dc5-b4b3-48140b3ba006,Namespace:calico-system,Attempt:0,}" Jun 25 16:25:02.613000 audit[3282]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:02.613000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd6aebbd20 a2=0 a3=7ffd6aebbd0c items=0 ppid=2514 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:02.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:02.641000 audit[3282]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:02.641000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd6aebbd20 a2=0 a3=7ffd6aebbd0c items=0 ppid=2514 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:02.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:02.956398 containerd[1293]: time="2024-06-25T16:25:02.954772087Z" level=error msg="Failed to destroy network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:02.959378 containerd[1293]: time="2024-06-25T16:25:02.959303297Z" level=error msg="encountered an error cleaning up failed sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:02.959478 containerd[1293]: time="2024-06-25T16:25:02.959419204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bw7kv,Uid:a82ce7d0-b43c-4d81-ae9f-10974dd66ff7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:02.959690 kubelet[2308]: E0625 16:25:02.959641 2308 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jun 25 16:25:02.960011 kubelet[2308]: E0625 16:25:02.959726 2308 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bw7kv" Jun 25 16:25:02.960011 kubelet[2308]: E0625 16:25:02.959756 2308 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bw7kv" Jun 25 16:25:02.960011 kubelet[2308]: E0625 16:25:02.959809 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bw7kv_calico-system(a82ce7d0-b43c-4d81-ae9f-10974dd66ff7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bw7kv_calico-system(a82ce7d0-b43c-4d81-ae9f-10974dd66ff7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:25:03.100048 containerd[1293]: time="2024-06-25T16:25:03.099973819Z" level=error msg="Failed to destroy network for sandbox 
\"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.113867 containerd[1293]: time="2024-06-25T16:25:03.106151727Z" level=error msg="Failed to destroy network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.113867 containerd[1293]: time="2024-06-25T16:25:03.106899909Z" level=error msg="encountered an error cleaning up failed sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.113867 containerd[1293]: time="2024-06-25T16:25:03.106973347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27tds,Uid:5419bba1-6081-4f31-bcc8-616bdda728d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.114110 kubelet[2308]: E0625 16:25:03.107309 2308 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.114110 kubelet[2308]: E0625 16:25:03.107406 2308 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-27tds" Jun 25 16:25:03.114110 kubelet[2308]: E0625 16:25:03.107452 2308 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-27tds" Jun 25 16:25:03.114222 kubelet[2308]: E0625 16:25:03.107525 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-27tds_kube-system(5419bba1-6081-4f31-bcc8-616bdda728d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-27tds_kube-system(5419bba1-6081-4f31-bcc8-616bdda728d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-27tds" podUID="5419bba1-6081-4f31-bcc8-616bdda728d4" Jun 25 16:25:03.123689 containerd[1293]: time="2024-06-25T16:25:03.123617905Z" level=error msg="encountered an error cleaning 
up failed sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.124010 containerd[1293]: time="2024-06-25T16:25:03.123976777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xmnbk,Uid:6b6ad376-38df-49e4-8a9f-dd64acf97dda,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.124771 kubelet[2308]: E0625 16:25:03.124708 2308 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.124861 kubelet[2308]: E0625 16:25:03.124785 2308 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xmnbk" Jun 25 16:25:03.124861 kubelet[2308]: E0625 16:25:03.124813 2308 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xmnbk" Jun 25 16:25:03.127867 kubelet[2308]: E0625 16:25:03.125205 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xmnbk_kube-system(6b6ad376-38df-49e4-8a9f-dd64acf97dda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xmnbk_kube-system(6b6ad376-38df-49e4-8a9f-dd64acf97dda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xmnbk" podUID="6b6ad376-38df-49e4-8a9f-dd64acf97dda" Jun 25 16:25:03.273119 kubelet[2308]: I0625 16:25:03.272995 2308 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:25:03.274085 containerd[1293]: time="2024-06-25T16:25:03.274046086Z" level=info msg="StopPodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\"" Jun 25 16:25:03.274392 containerd[1293]: time="2024-06-25T16:25:03.274374292Z" level=info msg="Ensure that sandbox c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89 in task-service has been cleanup successfully" Jun 25 16:25:03.285949 kubelet[2308]: I0625 16:25:03.285918 2308 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:25:03.290239 containerd[1293]: 
time="2024-06-25T16:25:03.286703315Z" level=info msg="StopPodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\"" Jun 25 16:25:03.290239 containerd[1293]: time="2024-06-25T16:25:03.286955007Z" level=info msg="Ensure that sandbox 72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11 in task-service has been cleanup successfully" Jun 25 16:25:03.308193 kubelet[2308]: I0625 16:25:03.308154 2308 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:25:03.330852 kubelet[2308]: E0625 16:25:03.317301 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:03.340382 containerd[1293]: time="2024-06-25T16:25:03.325124225Z" level=info msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\"" Jun 25 16:25:03.340382 containerd[1293]: time="2024-06-25T16:25:03.325424708Z" level=info msg="Ensure that sandbox 7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d in task-service has been cleanup successfully" Jun 25 16:25:03.381263 containerd[1293]: time="2024-06-25T16:25:03.377985858Z" level=error msg="Failed to destroy network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.426245 containerd[1293]: time="2024-06-25T16:25:03.426162374Z" level=error msg="encountered an error cleaning up failed sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.427217 containerd[1293]: time="2024-06-25T16:25:03.427183549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678c89559-dn457,Uid:ef011844-7458-4dc5-b4b3-48140b3ba006,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.436885 kubelet[2308]: E0625 16:25:03.433907 2308 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.436885 kubelet[2308]: E0625 16:25:03.433989 2308 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-678c89559-dn457" Jun 25 16:25:03.436885 kubelet[2308]: E0625 16:25:03.434016 2308 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-678c89559-dn457" Jun 25 16:25:03.437160 kubelet[2308]: E0625 16:25:03.434067 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-678c89559-dn457_calico-system(ef011844-7458-4dc5-b4b3-48140b3ba006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-678c89559-dn457_calico-system(ef011844-7458-4dc5-b4b3-48140b3ba006)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-678c89559-dn457" podUID="ef011844-7458-4dc5-b4b3-48140b3ba006" Jun 25 16:25:03.620758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61-shm.mount: Deactivated successfully. Jun 25 16:25:03.620878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d-shm.mount: Deactivated successfully. Jun 25 16:25:03.620944 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89-shm.mount: Deactivated successfully. Jun 25 16:25:03.621007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11-shm.mount: Deactivated successfully. 
Jun 25 16:25:03.638189 containerd[1293]: time="2024-06-25T16:25:03.638093947Z" level=error msg="StopPodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" failed" error="failed to destroy network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.638451 kubelet[2308]: E0625 16:25:03.638409 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:25:03.638528 kubelet[2308]: E0625 16:25:03.638472 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89"} Jun 25 16:25:03.638567 kubelet[2308]: E0625 16:25:03.638553 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b6ad376-38df-49e4-8a9f-dd64acf97dda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:03.646778 kubelet[2308]: E0625 16:25:03.638601 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"6b6ad376-38df-49e4-8a9f-dd64acf97dda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xmnbk" podUID="6b6ad376-38df-49e4-8a9f-dd64acf97dda" Jun 25 16:25:03.666885 containerd[1293]: time="2024-06-25T16:25:03.665018003Z" level=error msg="StopPodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" failed" error="failed to destroy network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.667119 kubelet[2308]: E0625 16:25:03.665295 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:25:03.667119 kubelet[2308]: E0625 16:25:03.665360 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11"} Jun 25 16:25:03.667119 kubelet[2308]: E0625 16:25:03.665406 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:03.667119 kubelet[2308]: E0625 16:25:03.665437 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:25:03.735708 containerd[1293]: time="2024-06-25T16:25:03.735637502Z" level=error msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" failed" error="failed to destroy network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:03.736327 kubelet[2308]: E0625 16:25:03.736126 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:25:03.736327 
kubelet[2308]: E0625 16:25:03.736185 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d"} Jun 25 16:25:03.736327 kubelet[2308]: E0625 16:25:03.736225 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5419bba1-6081-4f31-bcc8-616bdda728d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:03.736327 kubelet[2308]: E0625 16:25:03.736277 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5419bba1-6081-4f31-bcc8-616bdda728d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-27tds" podUID="5419bba1-6081-4f31-bcc8-616bdda728d4" Jun 25 16:25:04.326236 kubelet[2308]: I0625 16:25:04.325287 2308 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:25:04.326623 containerd[1293]: time="2024-06-25T16:25:04.326111130Z" level=info msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\"" Jun 25 16:25:04.331677 containerd[1293]: time="2024-06-25T16:25:04.329385050Z" level=info msg="Ensure that sandbox ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61 in 
task-service has been cleanup successfully" Jun 25 16:25:04.753617 containerd[1293]: time="2024-06-25T16:25:04.753307052Z" level=error msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" failed" error="failed to destroy network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:04.784730 kubelet[2308]: E0625 16:25:04.783189 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:25:04.784730 kubelet[2308]: E0625 16:25:04.783284 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61"} Jun 25 16:25:04.784730 kubelet[2308]: E0625 16:25:04.783328 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef011844-7458-4dc5-b4b3-48140b3ba006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:04.784730 kubelet[2308]: E0625 16:25:04.783356 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"ef011844-7458-4dc5-b4b3-48140b3ba006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-678c89559-dn457" podUID="ef011844-7458-4dc5-b4b3-48140b3ba006" Jun 25 16:25:05.296097 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:43556.service - OpenSSH per-connection server daemon (10.0.0.1:43556). Jun 25 16:25:05.305205 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:25:05.305485 kernel: audit: type=1130 audit(1719332705.295:525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.104:22-10.0.0.1:43556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:05.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.104:22-10.0.0.1:43556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:05.702000 audit[3510]: USER_ACCT pid=3510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:05.711581 kernel: audit: type=1101 audit(1719332705.702:526): pid=3510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:05.711736 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 43556 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:05.710000 audit[3510]: CRED_ACQ pid=3510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:05.725672 kernel: audit: type=1103 audit(1719332705.710:527): pid=3510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:05.725874 kernel: audit: type=1006 audit(1719332705.711:528): pid=3510 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:25:05.715909 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:05.711000 audit[3510]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde13433e0 a2=3 a3=7fb08af7b480 items=0 ppid=1 pid=3510 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
16:25:05.758462 systemd-logind[1278]: New session 11 of user core. Jun 25 16:25:05.802600 kernel: audit: type=1300 audit(1719332705.711:528): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde13433e0 a2=3 a3=7fb08af7b480 items=0 ppid=1 pid=3510 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:05.806449 kernel: audit: type=1327 audit(1719332705.711:528): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:05.711000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:05.830600 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 16:25:05.895756 kernel: audit: type=1105 audit(1719332705.860:529): pid=3510 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:05.895871 kernel: audit: type=1103 audit(1719332705.867:530): pid=3513 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:05.860000 audit[3510]: USER_START pid=3510 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:05.867000 audit[3513]: CRED_ACQ pid=3513 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:06.437954 sshd[3510]: 
pam_unix(sshd:session): session closed for user core Jun 25 16:25:06.438000 audit[3510]: USER_END pid=3510 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:06.438000 audit[3510]: CRED_DISP pid=3510 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:06.800162 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:43556.service: Deactivated successfully. Jun 25 16:25:06.801032 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:25:06.802494 systemd-logind[1278]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:25:06.807240 kernel: audit: type=1106 audit(1719332706.438:531): pid=3510 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:06.807344 kernel: audit: type=1104 audit(1719332706.438:532): pid=3510 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:06.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.104:22-10.0.0.1:43556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:06.809045 systemd-logind[1278]: Removed session 11. 
Jun 25 16:25:09.643000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:09.643000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000ce1740 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:25:09.643000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:09.643000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:09.643000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001dea920 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:25:09.643000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:09.744000 audit[2189]: AVC avc: denied { watch } for pid=2189 
comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:09.744000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c00fbad3b0 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:25:09.744000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:25:09.744000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:09.744000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=64 a1=c008736630 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:25:09.744000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:25:09.745000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 
scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:09.745000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c0089da700 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:25:09.745000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:25:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=64 a1=c00fbad500 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:25:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:25:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file 
permissive=0 Jun 25 16:25:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c009a8f320 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:25:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:25:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=61 a1=c00f7076b0 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:25:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:25:11.453468 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:53328.service - OpenSSH per-connection server daemon (10.0.0.1:53328). 
Jun 25 16:25:11.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.104:22-10.0.0.1:53328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:11.496574 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:25:11.496758 kernel: audit: type=1130 audit(1719332711.452:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.104:22-10.0.0.1:53328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:12.729918 containerd[1293]: time="2024-06-25T16:25:12.729861755Z" level=info msg="StopPodSandbox for \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\"" Jun 25 16:25:13.021364 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 53328 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:13.020000 audit[3529]: USER_ACCT pid=3529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.039638 kernel: audit: type=1101 audit(1719332713.020:543): pid=3529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.039763 kernel: audit: type=1103 audit(1719332713.022:544): pid=3529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.039785 kernel: audit: type=1006 audit(1719332713.022:545): pid=3529 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:25:13.022000 audit[3529]: CRED_ACQ pid=3529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.039273 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:13.040770 kernel: audit: type=1300 audit(1719332713.022:545): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7a040ec0 a2=3 a3=7fd258a27480 items=0 ppid=1 pid=3529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:13.022000 audit[3529]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7a040ec0 a2=3 a3=7fd258a27480 items=0 ppid=1 pid=3529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:13.047892 kernel: audit: type=1327 audit(1719332713.022:545): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:13.022000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:13.065536 systemd-logind[1278]: New session 12 of user core. Jun 25 16:25:13.132305 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 16:25:13.142000 audit[3529]: USER_START pid=3529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.308625 kernel: audit: type=1105 audit(1719332713.142:546): pid=3529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.144000 audit[3534]: CRED_ACQ pid=3534 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.323619 kernel: audit: type=1103 audit(1719332713.144:547): pid=3534 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:13.325675 containerd[1293]: time="2024-06-25T16:25:12.729965987Z" level=info msg="TearDown network for sandbox \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" successfully" Jun 25 16:25:13.325675 containerd[1293]: time="2024-06-25T16:25:13.325529301Z" level=info msg="StopPodSandbox for \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" returns successfully" Jun 25 16:25:13.326330 containerd[1293]: time="2024-06-25T16:25:13.326260959Z" level=info msg="RemovePodSandbox for \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\"" Jun 25 16:25:13.353008 containerd[1293]: time="2024-06-25T16:25:13.326305355Z" level=info msg="Forcibly stopping sandbox 
\"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\"" Jun 25 16:25:13.353346 containerd[1293]: time="2024-06-25T16:25:13.353309687Z" level=info msg="TearDown network for sandbox \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" successfully" Jun 25 16:25:14.040625 sshd[3529]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:14.041000 audit[3529]: USER_END pid=3529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:14.043759 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:53328.service: Deactivated successfully. Jun 25 16:25:14.044716 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:25:14.046003 systemd-logind[1278]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:25:14.047038 systemd-logind[1278]: Removed session 12. 
Jun 25 16:25:14.124105 kernel: audit: type=1106 audit(1719332714.041:548): pid=3529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:14.124290 kernel: audit: type=1104 audit(1719332714.041:549): pid=3529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:14.041000 audit[3529]: CRED_DISP pid=3529 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:14.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.104:22-10.0.0.1:53328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d5a880 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:25:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00120f160 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:25:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:14.376000 audit[2196]: AVC avc: denied { watch } for pid=2196 
comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:14.376000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001deb0c0 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:25:14.376000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:14.376000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:25:14.376000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001deb0e0 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:25:14.376000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:25:14.734742 containerd[1293]: time="2024-06-25T16:25:14.734637586Z" level=info msg="StopPodSandbox for 
\"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\"" Jun 25 16:25:14.735276 containerd[1293]: time="2024-06-25T16:25:14.734875163Z" level=info msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\"" Jun 25 16:25:15.424950 containerd[1293]: time="2024-06-25T16:25:15.424882812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:25:15.425470 containerd[1293]: time="2024-06-25T16:25:15.425415366Z" level=info msg="RemovePodSandbox \"93f5352477908689ec2d05062338eb73ded951d28d16f70237852310f485977f\" returns successfully" Jun 25 16:25:15.443291 containerd[1293]: time="2024-06-25T16:25:15.443216584Z" level=error msg="StopPodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" failed" error="failed to destroy network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:15.443922 kubelet[2308]: E0625 16:25:15.443705 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:25:15.443922 kubelet[2308]: E0625 16:25:15.443776 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89"} Jun 25 16:25:15.443922 kubelet[2308]: E0625 16:25:15.443817 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b6ad376-38df-49e4-8a9f-dd64acf97dda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:15.443922 kubelet[2308]: E0625 16:25:15.443861 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b6ad376-38df-49e4-8a9f-dd64acf97dda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xmnbk" podUID="6b6ad376-38df-49e4-8a9f-dd64acf97dda" Jun 25 16:25:15.449248 containerd[1293]: time="2024-06-25T16:25:15.449151583Z" level=error msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" failed" error="failed to destroy network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:15.449528 kubelet[2308]: E0625 16:25:15.449458 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:25:15.449604 kubelet[2308]: E0625 16:25:15.449541 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d"} Jun 25 16:25:15.449604 kubelet[2308]: E0625 16:25:15.449588 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5419bba1-6081-4f31-bcc8-616bdda728d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:15.449728 kubelet[2308]: E0625 16:25:15.449618 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5419bba1-6081-4f31-bcc8-616bdda728d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-27tds" podUID="5419bba1-6081-4f31-bcc8-616bdda728d4" Jun 25 16:25:15.830945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount669793990.mount: Deactivated successfully. 
Jun 25 16:25:17.735062 containerd[1293]: time="2024-06-25T16:25:17.735007900Z" level=info msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\"" Jun 25 16:25:17.735484 containerd[1293]: time="2024-06-25T16:25:17.735008121Z" level=info msg="StopPodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\"" Jun 25 16:25:17.785117 containerd[1293]: time="2024-06-25T16:25:17.784472790Z" level=error msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" failed" error="failed to destroy network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:17.785306 kubelet[2308]: E0625 16:25:17.784818 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:25:17.785306 kubelet[2308]: E0625 16:25:17.784899 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61"} Jun 25 16:25:17.785306 kubelet[2308]: E0625 16:25:17.784944 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef011844-7458-4dc5-b4b3-48140b3ba006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:17.785306 kubelet[2308]: E0625 16:25:17.784976 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef011844-7458-4dc5-b4b3-48140b3ba006\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-678c89559-dn457" podUID="ef011844-7458-4dc5-b4b3-48140b3ba006" Jun 25 16:25:17.822512 containerd[1293]: time="2024-06-25T16:25:17.792075967Z" level=error msg="StopPodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" failed" error="failed to destroy network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:17.822763 kubelet[2308]: E0625 16:25:17.822606 2308 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:25:17.822862 kubelet[2308]: E0625 16:25:17.822754 2308 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11"} Jun 25 16:25:17.822862 kubelet[2308]: E0625 16:25:17.822812 2308 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:17.822999 kubelet[2308]: E0625 16:25:17.822872 2308 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bw7kv" podUID="a82ce7d0-b43c-4d81-ae9f-10974dd66ff7" Jun 25 16:25:19.053048 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:32782.service - OpenSSH per-connection server daemon (10.0.0.1:32782). Jun 25 16:25:19.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.104:22-10.0.0.1:32782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:19.071383 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:25:19.071586 kernel: audit: type=1130 audit(1719332719.052:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.104:22-10.0.0.1:32782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:19.622000 audit[3651]: USER_ACCT pid=3651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.624205 sshd[3651]: Accepted publickey for core from 10.0.0.1 port 32782 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:19.627289 sshd[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:19.624000 audit[3651]: CRED_ACQ pid=3651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.641261 kernel: audit: type=1101 audit(1719332719.622:556): pid=3651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.641430 kernel: audit: type=1103 audit(1719332719.624:557): pid=3651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.641459 kernel: audit: type=1006 audit(1719332719.624:558): pid=3651 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=13 res=1 Jun 25 16:25:19.641484 kernel: audit: type=1300 audit(1719332719.624:558): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03431560 a2=3 a3=7f4561506480 items=0 ppid=1 pid=3651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:19.624000 audit[3651]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe03431560 a2=3 a3=7f4561506480 items=0 ppid=1 pid=3651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:19.646554 kernel: audit: type=1327 audit(1719332719.624:558): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:19.624000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:19.657420 systemd-logind[1278]: New session 13 of user core. Jun 25 16:25:19.664206 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 16:25:19.672000 audit[3651]: USER_START pid=3651 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.762003 kernel: audit: type=1105 audit(1719332719.672:559): pid=3651 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.674000 audit[3653]: CRED_ACQ pid=3653 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.769985 kernel: audit: type=1103 audit(1719332719.674:560): pid=3653 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.976780 sshd[3651]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:19.977000 audit[3651]: USER_END pid=3651 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.979404 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:32782.service: Deactivated successfully. Jun 25 16:25:19.980138 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:25:19.980870 systemd-logind[1278]: Session 13 logged out. Waiting for processes to exit. 
Jun 25 16:25:19.981610 systemd-logind[1278]: Removed session 13. Jun 25 16:25:20.033900 kernel: audit: type=1106 audit(1719332719.977:561): pid=3651 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:20.034043 kernel: audit: type=1104 audit(1719332719.977:562): pid=3651 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.977000 audit[3651]: CRED_DISP pid=3651 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:19.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.104:22-10.0.0.1:32782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:23.558776 containerd[1293]: time="2024-06-25T16:25:23.558366986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:23.712445 containerd[1293]: time="2024-06-25T16:25:23.712332552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:25:23.798425 containerd[1293]: time="2024-06-25T16:25:23.798353697Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:23.886696 containerd[1293]: time="2024-06-25T16:25:23.886593905Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:23.916168 containerd[1293]: time="2024-06-25T16:25:23.914886823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:23.916168 containerd[1293]: time="2024-06-25T16:25:23.915709165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 21.600258377s" Jun 25 16:25:23.916168 containerd[1293]: time="2024-06-25T16:25:23.915755494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:25:23.929076 containerd[1293]: time="2024-06-25T16:25:23.929028176Z" level=info msg="CreateContainer within sandbox 
\"b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:25:24.893064 containerd[1293]: time="2024-06-25T16:25:24.892984430Z" level=info msg="CreateContainer within sandbox \"b193807c2735593633ec00db180813350eac38803d4d98c896d43e0a4e6a176f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8f335f7b8278911378a53816eebcfe447f641d519375787b991569d58fc7f95d\"" Jun 25 16:25:24.893769 containerd[1293]: time="2024-06-25T16:25:24.893732541Z" level=info msg="StartContainer for \"8f335f7b8278911378a53816eebcfe447f641d519375787b991569d58fc7f95d\"" Jun 25 16:25:24.965008 systemd[1]: Started cri-containerd-8f335f7b8278911378a53816eebcfe447f641d519375787b991569d58fc7f95d.scope - libcontainer container 8f335f7b8278911378a53816eebcfe447f641d519375787b991569d58fc7f95d. Jun 25 16:25:24.974000 audit: BPF prog-id=139 op=LOAD Jun 25 16:25:24.985065 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:25:24.985173 kernel: audit: type=1334 audit(1719332724.974:564): prog-id=139 op=LOAD Jun 25 16:25:24.974000 audit[3674]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3048 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:24.989256 kernel: audit: type=1300 audit(1719332724.974:564): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3048 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:24.974000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866333335663762383237383931313337386135333831366565626366 Jun 25 16:25:24.993763 kernel: audit: type=1327 audit(1719332724.974:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866333335663762383237383931313337386135333831366565626366 Jun 25 16:25:24.974000 audit: BPF prog-id=140 op=LOAD Jun 25 16:25:24.974000 audit[3674]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3048 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:24.996599 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:32798.service - OpenSSH per-connection server daemon (10.0.0.1:32798). 
Jun 25 16:25:24.999266 kernel: audit: type=1334 audit(1719332724.974:565): prog-id=140 op=LOAD Jun 25 16:25:24.999306 kernel: audit: type=1300 audit(1719332724.974:565): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3048 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:24.999334 kernel: audit: type=1327 audit(1719332724.974:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866333335663762383237383931313337386135333831366565626366 Jun 25 16:25:24.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866333335663762383237383931313337386135333831366565626366 Jun 25 16:25:25.006897 kernel: audit: type=1334 audit(1719332724.974:566): prog-id=140 op=UNLOAD Jun 25 16:25:25.007011 kernel: audit: type=1334 audit(1719332724.974:567): prog-id=139 op=UNLOAD Jun 25 16:25:25.007034 kernel: audit: type=1334 audit(1719332724.974:568): prog-id=141 op=LOAD Jun 25 16:25:24.974000 audit: BPF prog-id=140 op=UNLOAD Jun 25 16:25:24.974000 audit: BPF prog-id=139 op=UNLOAD Jun 25 16:25:24.974000 audit: BPF prog-id=141 op=LOAD Jun 25 16:25:25.012604 kernel: audit: type=1300 audit(1719332724.974:568): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3048 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:24.974000 audit[3674]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 
a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3048 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:24.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866333335663762383237383931313337386135333831366565626366 Jun 25 16:25:24.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.104:22-10.0.0.1:32798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:25.246672 containerd[1293]: time="2024-06-25T16:25:25.246550648Z" level=info msg="StartContainer for \"8f335f7b8278911378a53816eebcfe447f641d519375787b991569d58fc7f95d\" returns successfully" Jun 25 16:25:25.312982 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:25:25.313081 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 16:25:25.313124 sshd[3693]: Accepted publickey for core from 10.0.0.1 port 32798 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:25.311000 audit[3693]: USER_ACCT pid=3693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:25.312000 audit[3693]: CRED_ACQ pid=3693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:25.312000 audit[3693]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff3099a90 a2=3 a3=7f9c11876480 items=0 ppid=1 pid=3693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:25.312000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:25.314507 sshd[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:25.319220 systemd-logind[1278]: New session 14 of user core. Jun 25 16:25:25.327979 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 16:25:25.330000 audit[3693]: USER_START pid=3693 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:25.331000 audit[3717]: CRED_ACQ pid=3717 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:25.377100 kubelet[2308]: E0625 16:25:25.376865 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:25.868932 sshd[3693]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:25.868000 audit[3693]: USER_END pid=3693 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:25.868000 audit[3693]: CRED_DISP pid=3693 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:25.871179 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:32798.service: Deactivated successfully. Jun 25 16:25:25.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.104:22-10.0.0.1:32798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:25.871900 systemd[1]: session-14.scope: Deactivated successfully. 
Jun 25 16:25:25.872377 systemd-logind[1278]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:25:25.873056 systemd-logind[1278]: Removed session 14. Jun 25 16:25:26.379396 kubelet[2308]: E0625 16:25:26.379362 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:26.398987 systemd[1]: run-containerd-runc-k8s.io-8f335f7b8278911378a53816eebcfe447f641d519375787b991569d58fc7f95d-runc.bJ0zlT.mount: Deactivated successfully. Jun 25 16:25:26.734584 containerd[1293]: time="2024-06-25T16:25:26.734482154Z" level=info msg="StopPodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\"" Jun 25 16:25:27.255224 kubelet[2308]: I0625 16:25:27.255159 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6sn6p" podStartSLOduration=6.230971196 podStartE2EDuration="46.255141556s" podCreationTimestamp="2024-06-25 16:24:41 +0000 UTC" firstStartedPulling="2024-06-25 16:24:43.893071157 +0000 UTC m=+31.237731190" lastFinishedPulling="2024-06-25 16:25:23.917241517 +0000 UTC m=+71.261901550" observedRunningTime="2024-06-25 16:25:25.419053941 +0000 UTC m=+72.763713994" watchObservedRunningTime="2024-06-25 16:25:27.255141556 +0000 UTC m=+74.599801579" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.254 [INFO][3795] k8s.go 608: Cleaning up netns ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.255 [INFO][3795] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" iface="eth0" netns="/var/run/netns/cni-a216c901-81db-f40e-777a-7e702243c02f" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.255 [INFO][3795] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" iface="eth0" netns="/var/run/netns/cni-a216c901-81db-f40e-777a-7e702243c02f" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.256 [INFO][3795] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" iface="eth0" netns="/var/run/netns/cni-a216c901-81db-f40e-777a-7e702243c02f" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.256 [INFO][3795] k8s.go 615: Releasing IP address(es) ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.256 [INFO][3795] utils.go 188: Calico CNI releasing IP address ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.364 [INFO][3803] ipam_plugin.go 411: Releasing address using handleID ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.364 [INFO][3803] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.364 [INFO][3803] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.633 [WARNING][3803] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.633 [INFO][3803] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.638 [INFO][3803] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:27.642116 containerd[1293]: 2024-06-25 16:25:27.640 [INFO][3795] k8s.go 621: Teardown processing complete. ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:25:27.642742 containerd[1293]: time="2024-06-25T16:25:27.642349742Z" level=info msg="TearDown network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" successfully" Jun 25 16:25:27.642742 containerd[1293]: time="2024-06-25T16:25:27.642397202Z" level=info msg="StopPodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" returns successfully" Jun 25 16:25:27.642817 kubelet[2308]: E0625 16:25:27.642778 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:27.643365 containerd[1293]: time="2024-06-25T16:25:27.643336546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xmnbk,Uid:6b6ad376-38df-49e4-8a9f-dd64acf97dda,Namespace:kube-system,Attempt:1,}" Jun 25 16:25:27.647600 systemd[1]: run-netns-cni\x2da216c901\x2d81db\x2df40e\x2d777a\x2d7e702243c02f.mount: Deactivated successfully. 
Jun 25 16:25:28.734818 containerd[1293]: time="2024-06-25T16:25:28.734768422Z" level=info msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\"" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.550 [INFO][3839] k8s.go 608: Cleaning up netns ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.550 [INFO][3839] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" iface="eth0" netns="/var/run/netns/cni-1e9e5110-2a4a-5aa9-c061-c7abb5afd61e" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.550 [INFO][3839] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" iface="eth0" netns="/var/run/netns/cni-1e9e5110-2a4a-5aa9-c061-c7abb5afd61e" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.550 [INFO][3839] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" iface="eth0" netns="/var/run/netns/cni-1e9e5110-2a4a-5aa9-c061-c7abb5afd61e" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.550 [INFO][3839] k8s.go 615: Releasing IP address(es) ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.550 [INFO][3839] utils.go 188: Calico CNI releasing IP address ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.580 [INFO][3850] ipam_plugin.go 411: Releasing address using handleID ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.580 [INFO][3850] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.580 [INFO][3850] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.586 [WARNING][3850] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.586 [INFO][3850] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.587 [INFO][3850] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:29.593563 containerd[1293]: 2024-06-25 16:25:29.591 [INFO][3839] k8s.go 621: Teardown processing complete. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:25:29.595739 systemd[1]: run-netns-cni\x2d1e9e5110\x2d2a4a\x2d5aa9\x2dc061\x2dc7abb5afd61e.mount: Deactivated successfully. 
Jun 25 16:25:29.596816 containerd[1293]: time="2024-06-25T16:25:29.596767162Z" level=info msg="TearDown network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" successfully" Jun 25 16:25:29.596816 containerd[1293]: time="2024-06-25T16:25:29.596804092Z" level=info msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" returns successfully" Jun 25 16:25:29.597179 kubelet[2308]: E0625 16:25:29.597152 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:29.597704 containerd[1293]: time="2024-06-25T16:25:29.597674913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27tds,Uid:5419bba1-6081-4f31-bcc8-616bdda728d4,Namespace:kube-system,Attempt:1,}" Jun 25 16:25:30.140520 systemd-networkd[1116]: califf6ba286cdd: Link UP Jun 25 16:25:30.190696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:30.190894 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califf6ba286cdd: link becomes ready Jun 25 16:25:30.190956 systemd-networkd[1116]: califf6ba286cdd: Gained carrier Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:28.758 [INFO][3811] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.016 [INFO][3811] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0 coredns-7db6d8ff4d- kube-system 6b6ad376-38df-49e4-8a9f-dd64acf97dda 874 0 2024-06-25 16:24:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-xmnbk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf6ba286cdd [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.016 [INFO][3811] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.586 [INFO][3858] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" HandleID="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.597 [INFO][3858] ipam_plugin.go 264: Auto assigning IP ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" HandleID="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000129d90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-xmnbk", "timestamp":"2024-06-25 16:25:29.586522059 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.597 [INFO][3858] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.597 [INFO][3858] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.597 [INFO][3858] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.599 [INFO][3858] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.953 [INFO][3858] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:29.998 [INFO][3858] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.001 [INFO][3858] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.003 [INFO][3858] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.003 [INFO][3858] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.005 [INFO][3858] ipam.go 1685: Creating new handle: k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6 Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.009 [INFO][3858] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.132 [INFO][3858] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] 
block=192.168.88.128/26 handle="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.132 [INFO][3858] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" host="localhost" Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.132 [INFO][3858] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:30.205368 containerd[1293]: 2024-06-25 16:25:30.132 [INFO][3858] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" HandleID="k8s-pod-network.b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:30.206130 containerd[1293]: 2024-06-25 16:25:30.133 [INFO][3811] k8s.go 386: Populated endpoint ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6b6ad376-38df-49e4-8a9f-dd64acf97dda", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-xmnbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6ba286cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:30.206130 containerd[1293]: 2024-06-25 16:25:30.134 [INFO][3811] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:30.206130 containerd[1293]: 2024-06-25 16:25:30.134 [INFO][3811] dataplane_linux.go 68: Setting the host side veth name to califf6ba286cdd ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:30.206130 containerd[1293]: 2024-06-25 16:25:30.191 [INFO][3811] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:30.206130 containerd[1293]: 
2024-06-25 16:25:30.192 [INFO][3811] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6b6ad376-38df-49e4-8a9f-dd64acf97dda", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6", Pod:"coredns-7db6d8ff4d-xmnbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6ba286cdd", MAC:"e6:48:0d:4c:4b:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:30.206130 containerd[1293]: 2024-06-25 16:25:30.201 [INFO][3811] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xmnbk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:25:30.521079 containerd[1293]: time="2024-06-25T16:25:30.520904049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:30.521079 containerd[1293]: time="2024-06-25T16:25:30.520970546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:30.521079 containerd[1293]: time="2024-06-25T16:25:30.520994882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:30.521079 containerd[1293]: time="2024-06-25T16:25:30.521005522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:30.538072 systemd[1]: run-containerd-runc-k8s.io-b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6-runc.OfpJPr.mount: Deactivated successfully. Jun 25 16:25:30.549116 systemd[1]: Started cri-containerd-b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6.scope - libcontainer container b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6. 
Jun 25 16:25:30.557000 audit: BPF prog-id=142 op=LOAD Jun 25 16:25:30.613433 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 16:25:30.613654 kernel: audit: type=1334 audit(1719332730.557:578): prog-id=142 op=LOAD Jun 25 16:25:30.613531 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:25:30.573000 audit: BPF prog-id=143 op=LOAD Jun 25 16:25:30.621664 kernel: audit: type=1334 audit(1719332730.573:579): prog-id=143 op=LOAD Jun 25 16:25:30.621754 kernel: audit: type=1300 audit(1719332730.573:579): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3898 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.573000 audit[3908]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3898 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303738653636643964396263633734386461353966643630633234 Jun 25 16:25:30.739886 kernel: audit: type=1327 audit(1719332730.573:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303738653636643964396263633734386461353966643630633234 Jun 25 16:25:30.740088 kernel: audit: type=1334 audit(1719332730.573:580): prog-id=144 op=LOAD Jun 25 16:25:30.573000 audit: BPF prog-id=144 op=LOAD Jun 25 
16:25:30.573000 audit[3908]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3898 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.745457 kernel: audit: type=1300 audit(1719332730.573:580): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3898 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.745630 kernel: audit: type=1327 audit(1719332730.573:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303738653636643964396263633734386461353966643630633234 Jun 25 16:25:30.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303738653636643964396263633734386461353966643630633234 Jun 25 16:25:30.573000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:25:30.842297 kernel: audit: type=1334 audit(1719332730.573:581): prog-id=144 op=UNLOAD Jun 25 16:25:30.573000 audit: BPF prog-id=143 op=UNLOAD Jun 25 16:25:30.843714 kernel: audit: type=1334 audit(1719332730.573:582): prog-id=143 op=UNLOAD Jun 25 16:25:30.843918 kernel: audit: type=1334 audit(1719332730.573:583): prog-id=145 op=LOAD Jun 25 16:25:30.573000 audit: BPF prog-id=145 op=LOAD Jun 25 16:25:30.573000 audit[3908]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3898 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237303738653636643964396263633734386461353966643630633234 Jun 25 16:25:30.602000 audit[3956]: AVC avc: denied { write } for pid=3956 comm="tee" name="fd" dev="proc" ino=28849 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:30.602000 audit[3956]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff7a397a2f a2=241 a3=1b6 items=1 ppid=3943 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.602000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:25:30.602000 audit: PATH item=0 name="/dev/fd/63" inode=27834 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:30.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:30.651000 audit[3982]: AVC avc: denied { write } for pid=3982 comm="tee" name="fd" dev="proc" ino=27869 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:30.651000 audit[3982]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffebc7ada2e a2=241 a3=1b6 items=1 ppid=3934 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:25:30.651000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:25:30.651000 audit: PATH item=0 name="/dev/fd/63" inode=27851 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:30.651000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:30.657000 audit[3997]: AVC avc: denied { write } for pid=3997 comm="tee" name="fd" dev="proc" ino=28882 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:30.657000 audit[3997]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd86b53a30 a2=241 a3=1b6 items=1 ppid=3939 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.657000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:25:30.657000 audit: PATH item=0 name="/dev/fd/63" inode=27864 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:30.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:30.666000 audit[4014]: AVC avc: denied { write } for pid=4014 comm="tee" name="fd" dev="proc" ino=27878 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:30.666000 audit[4014]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf2df6a1f a2=241 a3=1b6 items=1 ppid=3960 pid=4014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.666000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:25:30.666000 audit: PATH item=0 name="/dev/fd/63" inode=27875 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:30.666000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:30.671000 audit[4017]: AVC avc: denied { write } for pid=4017 comm="tee" name="fd" dev="proc" ino=28893 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:30.671000 audit[4017]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe44445a1e a2=241 a3=1b6 items=1 ppid=3933 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.671000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:25:30.671000 audit: PATH item=0 name="/dev/fd/63" inode=28890 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:30.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:30.674000 audit[4011]: AVC avc: denied { write } for pid=4011 comm="tee" name="fd" dev="proc" ino=28897 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:30.674000 audit[4011]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffecfd3da2e a2=241 
a3=1b6 items=1 ppid=3947 pid=4011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.674000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:25:30.674000 audit: PATH item=0 name="/dev/fd/63" inode=27006 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:30.674000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:30.715000 audit[4021]: AVC avc: denied { write } for pid=4021 comm="tee" name="fd" dev="proc" ino=27895 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:30.715000 audit[4021]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff2f010a2e a2=241 a3=1b6 items=1 ppid=3940 pid=4021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:30.715000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:25:30.715000 audit: PATH item=0 name="/dev/fd/63" inode=28899 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:30.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:30.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.104:22-10.0.0.1:46480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:30.879570 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:46480.service - OpenSSH per-connection server daemon (10.0.0.1:46480). Jun 25 16:25:30.923305 containerd[1293]: time="2024-06-25T16:25:30.916188979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xmnbk,Uid:6b6ad376-38df-49e4-8a9f-dd64acf97dda,Namespace:kube-system,Attempt:1,} returns sandbox id \"b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6\"" Jun 25 16:25:30.923305 containerd[1293]: time="2024-06-25T16:25:30.920169740Z" level=info msg="CreateContainer within sandbox \"b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:25:30.923379 kubelet[2308]: E0625 16:25:30.916909 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:31.181000 audit[4029]: USER_ACCT pid=4029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:31.182471 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 46480 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:31.182000 audit[4029]: CRED_ACQ pid=4029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:31.182000 audit[4029]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffced8008b0 a2=3 a3=7f8200333480 items=0 ppid=1 pid=4029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:25:31.182000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:31.196000 audit[4029]: USER_START pid=4029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:31.198000 audit[4077]: CRED_ACQ pid=4077 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:31.190243 systemd-logind[1278]: New session 15 of user core. Jun 25 16:25:31.184031 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:31.192982 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:25:31.231164 systemd-networkd[1116]: vxlan.calico: Link UP Jun 25 16:25:31.231167 systemd-networkd[1116]: vxlan.calico: Gained carrier Jun 25 16:25:31.241000 audit: BPF prog-id=146 op=LOAD Jun 25 16:25:31.241000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5beaa2f0 a2=70 a3=7fe159d7a000 items=0 ppid=3935 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:31.241000 audit: BPF prog-id=146 op=UNLOAD Jun 25 16:25:31.241000 audit: BPF prog-id=147 op=LOAD Jun 25 16:25:31.241000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5beaa2f0 a2=70 a3=6f items=0 
ppid=3935 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:31.241000 audit: BPF prog-id=147 op=UNLOAD Jun 25 16:25:31.241000 audit: BPF prog-id=148 op=LOAD Jun 25 16:25:31.241000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc5beaa280 a2=70 a3=7ffc5beaa2f0 items=0 ppid=3935 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:31.241000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:25:31.241000 audit: BPF prog-id=149 op=LOAD Jun 25 16:25:31.241000 audit[4094]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5beaa2b0 a2=70 a3=0 items=0 ppid=3935 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.241000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:31.252000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:25:31.252000 audit[1109]: SYSCALL arch=c000003e syscall=56 
success=yes exit=4106 a0=1200011 a1=0 a2=0 a3=7f0b572d6c90 items=0 ppid=1 pid=1109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-udevd" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.252000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-udevd" Jun 25 16:25:31.252629 systemd-networkd[1116]: calico_tmp_B: Failed to manage SR-IOV PF and VF ports, ignoring: Invalid argument Jun 25 16:25:31.542000 audit[4132]: NETFILTER_CFG table=raw:97 family=2 entries=19 op=nft_register_chain pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:31.542000 audit[4132]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffd5a3449b0 a2=0 a3=7ffd5a34499c items=0 ppid=3935 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.542000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:31.542000 audit[4134]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:31.542000 audit[4134]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffedc88b090 a2=0 a3=7ffedc88b07c items=0 ppid=3935 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.542000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:31.542000 audit[4133]: NETFILTER_CFG 
table=mangle:99 family=2 entries=16 op=nft_register_chain pid=4133 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:31.542000 audit[4133]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffeaa671a20 a2=0 a3=7ffeaa671a0c items=0 ppid=3935 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.542000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:31.546000 audit[4137]: NETFILTER_CFG table=filter:100 family=2 entries=69 op=nft_register_chain pid=4137 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:31.546000 audit[4137]: SYSCALL arch=c000003e syscall=46 success=yes exit=36404 a0=3 a1=7ffea94a4460 a2=0 a3=7ffea94a444c items=0 ppid=3935 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:31.546000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:31.735487 containerd[1293]: time="2024-06-25T16:25:31.735310064Z" level=info msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\"" Jun 25 16:25:31.740473 sshd[4029]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:31.741000 audit[4029]: USER_END pid=4029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jun 25 16:25:31.741000 audit[4029]: CRED_DISP pid=4029 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:31.743684 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:46480.service: Deactivated successfully. Jun 25 16:25:31.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.104:22-10.0.0.1:46480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:31.744526 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:25:31.745281 systemd-logind[1278]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:25:31.746153 systemd-logind[1278]: Removed session 15. Jun 25 16:25:31.770663 systemd-networkd[1116]: califf6ba286cdd: Gained IPv6LL Jun 25 16:25:32.285872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676037055.mount: Deactivated successfully. Jun 25 16:25:32.292474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432082072.mount: Deactivated successfully. 
Jun 25 16:25:32.410026 systemd-networkd[1116]: vxlan.calico: Gained IPv6LL Jun 25 16:25:33.004765 containerd[1293]: time="2024-06-25T16:25:33.004684075Z" level=info msg="CreateContainer within sandbox \"b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b522759c7a552d1453070621adb5db4ab428bd3ad0ce53eef55f47745cfcb23\"" Jun 25 16:25:33.006484 containerd[1293]: time="2024-06-25T16:25:33.005226047Z" level=info msg="StartContainer for \"3b522759c7a552d1453070621adb5db4ab428bd3ad0ce53eef55f47745cfcb23\"" Jun 25 16:25:33.028983 systemd[1]: Started cri-containerd-3b522759c7a552d1453070621adb5db4ab428bd3ad0ce53eef55f47745cfcb23.scope - libcontainer container 3b522759c7a552d1453070621adb5db4ab428bd3ad0ce53eef55f47745cfcb23. Jun 25 16:25:33.061000 audit: BPF prog-id=150 op=LOAD Jun 25 16:25:33.061000 audit: BPF prog-id=151 op=LOAD Jun 25 16:25:33.061000 audit[4213]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3898 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:33.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353232373539633761353532643134353330373036323161646235 Jun 25 16:25:33.061000 audit: BPF prog-id=152 op=LOAD Jun 25 16:25:33.061000 audit[4213]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3898 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:33.061000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353232373539633761353532643134353330373036323161646235 Jun 25 16:25:33.061000 audit: BPF prog-id=152 op=UNLOAD Jun 25 16:25:33.061000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:25:33.062000 audit: BPF prog-id=153 op=LOAD Jun 25 16:25:33.062000 audit[4213]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3898 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:33.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353232373539633761353532643134353330373036323161646235 Jun 25 16:25:33.170975 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:33.171080 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic8d2492dd75: link becomes ready Jun 25 16:25:33.169295 systemd-networkd[1116]: calic8d2492dd75: Link UP Jun 25 16:25:33.170369 systemd-networkd[1116]: calic8d2492dd75: Gained carrier Jun 25 16:25:33.228656 containerd[1293]: time="2024-06-25T16:25:33.228572882Z" level=info msg="StartContainer for \"3b522759c7a552d1453070621adb5db4ab428bd3ad0ce53eef55f47745cfcb23\" returns successfully" Jun 25 16:25:33.395511 kubelet[2308]: E0625 16:25:33.395480 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:31.623 [INFO][4141] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--27tds-eth0 coredns-7db6d8ff4d- kube-system 5419bba1-6081-4f31-bcc8-616bdda728d4 892 0 2024-06-25 16:24:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-27tds eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic8d2492dd75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:31.624 [INFO][4141] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:31.721 [INFO][4157] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" HandleID="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:31.823 [INFO][4157] ipam_plugin.go 264: Auto assigning IP ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" HandleID="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051330), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-27tds", "timestamp":"2024-06-25 
16:25:31.721322965 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:31.823 [INFO][4157] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:31.823 [INFO][4157] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:31.823 [INFO][4157] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:32.439 [INFO][4157] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.113 [INFO][4157] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.116 [INFO][4157] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.117 [INFO][4157] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.119 [INFO][4157] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.119 [INFO][4157] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.120 [INFO][4157] ipam.go 1685: Creating new handle: k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c Jun 25 
16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.122 [INFO][4157] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.163 [INFO][4157] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.163 [INFO][4157] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" host="localhost" Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.164 [INFO][4157] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:33.451118 containerd[1293]: 2024-06-25 16:25:33.164 [INFO][4157] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" HandleID="k8s-pod-network.906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:33.451945 containerd[1293]: 2024-06-25 16:25:33.166 [INFO][4141] k8s.go 386: Populated endpoint ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27tds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5419bba1-6081-4f31-bcc8-616bdda728d4", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 
24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-27tds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8d2492dd75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:33.451945 containerd[1293]: 2024-06-25 16:25:33.166 [INFO][4141] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:33.451945 containerd[1293]: 2024-06-25 16:25:33.166 [INFO][4141] dataplane_linux.go 68: Setting the host side veth name to calic8d2492dd75 ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:33.451945 containerd[1293]: 2024-06-25 16:25:33.170 [INFO][4141] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:33.451945 containerd[1293]: 2024-06-25 16:25:33.171 [INFO][4141] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27tds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5419bba1-6081-4f31-bcc8-616bdda728d4", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c", Pod:"coredns-7db6d8ff4d-27tds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calic8d2492dd75", MAC:"fa:04:ea:0a:0f:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:33.451945 containerd[1293]: 2024-06-25 16:25:33.449 [INFO][4141] k8s.go 500: Wrote updated endpoint to datastore ContainerID="906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27tds" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:25:33.461000 audit[4255]: NETFILTER_CFG table=filter:101 family=2 entries=30 op=nft_register_chain pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:33.461000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=17032 a0=3 a1=7fffb6296c60 a2=0 a3=7fffb6296c4c items=0 ppid=3935 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:33.461000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.007 [INFO][4189] k8s.go 608: Cleaning up netns ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.007 [INFO][4189] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" iface="eth0" netns="/var/run/netns/cni-eee6c9fa-a886-d84a-d3c6-5246dd1ce16b" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.007 [INFO][4189] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" iface="eth0" netns="/var/run/netns/cni-eee6c9fa-a886-d84a-d3c6-5246dd1ce16b" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.008 [INFO][4189] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" iface="eth0" netns="/var/run/netns/cni-eee6c9fa-a886-d84a-d3c6-5246dd1ce16b" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.008 [INFO][4189] k8s.go 615: Releasing IP address(es) ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.008 [INFO][4189] utils.go 188: Calico CNI releasing IP address ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.081 [INFO][4197] ipam_plugin.go 411: Releasing address using handleID ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:32.081 [INFO][4197] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:33.164 [INFO][4197] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:33.448 [WARNING][4197] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:33.448 [INFO][4197] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:33.484 [INFO][4197] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:33.488009 containerd[1293]: 2024-06-25 16:25:33.486 [INFO][4189] k8s.go 621: Teardown processing complete. ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:25:33.491081 systemd[1]: run-netns-cni\x2deee6c9fa\x2da886\x2dd84a\x2dd3c6\x2d5246dd1ce16b.mount: Deactivated successfully. Jun 25 16:25:33.491808 containerd[1293]: time="2024-06-25T16:25:33.491753466Z" level=info msg="TearDown network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" successfully" Jun 25 16:25:33.491918 containerd[1293]: time="2024-06-25T16:25:33.491808812Z" level=info msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" returns successfully" Jun 25 16:25:33.492535 containerd[1293]: time="2024-06-25T16:25:33.492506780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678c89559-dn457,Uid:ef011844-7458-4dc5-b4b3-48140b3ba006,Namespace:calico-system,Attempt:1,}" Jun 25 16:25:33.655844 containerd[1293]: time="2024-06-25T16:25:33.655590991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:33.655844 containerd[1293]: time="2024-06-25T16:25:33.655689679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:33.655844 containerd[1293]: time="2024-06-25T16:25:33.655733402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:33.655844 containerd[1293]: time="2024-06-25T16:25:33.655753229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:33.678087 systemd[1]: Started cri-containerd-906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c.scope - libcontainer container 906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c. Jun 25 16:25:33.687000 audit: BPF prog-id=154 op=LOAD Jun 25 16:25:33.688000 audit: BPF prog-id=155 op=LOAD Jun 25 16:25:33.688000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4264 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:33.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930363830376562633232343236643265323664363137306638636632 Jun 25 16:25:33.688000 audit: BPF prog-id=156 op=LOAD Jun 25 16:25:33.688000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4264 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:33.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930363830376562633232343236643265323664363137306638636632 Jun 25 16:25:33.688000 audit: BPF prog-id=156 op=UNLOAD Jun 25 16:25:33.688000 audit: BPF prog-id=155 op=UNLOAD Jun 25 16:25:33.688000 audit: BPF prog-id=157 op=LOAD Jun 25 16:25:33.688000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4264 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:33.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930363830376562633232343236643265323664363137306638636632 Jun 25 16:25:33.689797 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:25:33.715560 containerd[1293]: time="2024-06-25T16:25:33.715505533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27tds,Uid:5419bba1-6081-4f31-bcc8-616bdda728d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c\"" Jun 25 16:25:33.716677 kubelet[2308]: E0625 16:25:33.716644 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:33.719077 containerd[1293]: time="2024-06-25T16:25:33.719042358Z" level=info msg="CreateContainer within sandbox 
\"906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:25:33.730003 kubelet[2308]: I0625 16:25:33.728887 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xmnbk" podStartSLOduration=66.728865015 podStartE2EDuration="1m6.728865015s" podCreationTimestamp="2024-06-25 16:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:25:33.728648592 +0000 UTC m=+81.073308625" watchObservedRunningTime="2024-06-25 16:25:33.728865015 +0000 UTC m=+81.073525058" Jun 25 16:25:33.735440 containerd[1293]: time="2024-06-25T16:25:33.735131879Z" level=info msg="StopPodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\"" Jun 25 16:25:34.048000 audit[4323]: NETFILTER_CFG table=filter:102 family=2 entries=14 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:34.048000 audit[4323]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcb250ebf0 a2=0 a3=7ffcb250ebdc items=0 ppid=2514 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:34.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:34.049000 audit[4323]: NETFILTER_CFG table=nat:103 family=2 entries=14 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:34.049000 audit[4323]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb250ebf0 a2=0 a3=0 items=0 ppid=2514 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:34.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:34.399513 kubelet[2308]: E0625 16:25:34.398707 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.352 [INFO][4312] k8s.go 608: Cleaning up netns ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.353 [INFO][4312] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" iface="eth0" netns="/var/run/netns/cni-a8d3f23c-f170-8aab-6cf6-fd4aa82e69a8" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.353 [INFO][4312] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" iface="eth0" netns="/var/run/netns/cni-a8d3f23c-f170-8aab-6cf6-fd4aa82e69a8" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.353 [INFO][4312] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" iface="eth0" netns="/var/run/netns/cni-a8d3f23c-f170-8aab-6cf6-fd4aa82e69a8" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.353 [INFO][4312] k8s.go 615: Releasing IP address(es) ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.353 [INFO][4312] utils.go 188: Calico CNI releasing IP address ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.394 [INFO][4324] ipam_plugin.go 411: Releasing address using handleID ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.394 [INFO][4324] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.394 [INFO][4324] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.439 [WARNING][4324] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.439 [INFO][4324] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.552 [INFO][4324] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:34.560917 containerd[1293]: 2024-06-25 16:25:34.559 [INFO][4312] k8s.go 621: Teardown processing complete. ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:25:34.563000 audit[4345]: NETFILTER_CFG table=filter:104 family=2 entries=11 op=nft_register_rule pid=4345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:34.563000 audit[4345]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe92c446c0 a2=0 a3=7ffe92c446ac items=0 ppid=2514 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:34.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:34.565049 containerd[1293]: time="2024-06-25T16:25:34.561095749Z" level=info msg="TearDown network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" successfully" Jun 25 16:25:34.565049 containerd[1293]: time="2024-06-25T16:25:34.561126879Z" level=info msg="StopPodSandbox for 
\"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" returns successfully" Jun 25 16:25:34.565049 containerd[1293]: time="2024-06-25T16:25:34.561739163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bw7kv,Uid:a82ce7d0-b43c-4d81-ae9f-10974dd66ff7,Namespace:calico-system,Attempt:1,}" Jun 25 16:25:34.563481 systemd[1]: run-netns-cni\x2da8d3f23c\x2df170\x2d8aab\x2d6cf6\x2dfd4aa82e69a8.mount: Deactivated successfully. Jun 25 16:25:34.566000 audit[4345]: NETFILTER_CFG table=nat:105 family=2 entries=35 op=nft_register_chain pid=4345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:34.566000 audit[4345]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe92c446c0 a2=0 a3=7ffe92c446ac items=0 ppid=2514 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:34.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:34.843818 systemd-networkd[1116]: calic8d2492dd75: Gained IPv6LL Jun 25 16:25:34.905036 containerd[1293]: time="2024-06-25T16:25:34.904966341Z" level=info msg="CreateContainer within sandbox \"906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3e38069279d68705b234d0f05add8c824f62c98070a2fb8b81f0601c42d7180\"" Jun 25 16:25:34.905519 containerd[1293]: time="2024-06-25T16:25:34.905483916Z" level=info msg="StartContainer for \"b3e38069279d68705b234d0f05add8c824f62c98070a2fb8b81f0601c42d7180\"" Jun 25 16:25:34.933001 systemd[1]: Started cri-containerd-b3e38069279d68705b234d0f05add8c824f62c98070a2fb8b81f0601c42d7180.scope - libcontainer container b3e38069279d68705b234d0f05add8c824f62c98070a2fb8b81f0601c42d7180. 
Jun 25 16:25:34.942000 audit: BPF prog-id=158 op=LOAD Jun 25 16:25:34.943000 audit: BPF prog-id=159 op=LOAD Jun 25 16:25:34.943000 audit[4358]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4264 pid=4358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:34.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653338303639323739643638373035623233346430663035616464 Jun 25 16:25:34.943000 audit: BPF prog-id=160 op=LOAD Jun 25 16:25:34.943000 audit[4358]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4264 pid=4358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:34.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653338303639323739643638373035623233346430663035616464 Jun 25 16:25:34.943000 audit: BPF prog-id=160 op=UNLOAD Jun 25 16:25:34.943000 audit: BPF prog-id=159 op=UNLOAD Jun 25 16:25:34.943000 audit: BPF prog-id=161 op=LOAD Jun 25 16:25:34.943000 audit[4358]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4264 pid=4358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:34.943000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653338303639323739643638373035623233346430663035616464 Jun 25 16:25:35.317598 containerd[1293]: time="2024-06-25T16:25:35.317455730Z" level=info msg="StartContainer for \"b3e38069279d68705b234d0f05add8c824f62c98070a2fb8b81f0601c42d7180\" returns successfully" Jun 25 16:25:35.401919 kubelet[2308]: E0625 16:25:35.401669 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:35.401919 kubelet[2308]: E0625 16:25:35.401780 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:35.924000 audit[4412]: NETFILTER_CFG table=filter:106 family=2 entries=8 op=nft_register_rule pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:35.940852 kernel: kauditd_printk_skb: 129 callbacks suppressed Jun 25 16:25:35.941051 kernel: audit: type=1325 audit(1719332735.924:635): table=filter:106 family=2 entries=8 op=nft_register_rule pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:35.924000 audit[4412]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffce20855a0 a2=0 a3=7ffce208558c items=0 ppid=2514 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:35.991252 kernel: audit: type=1300 audit(1719332735.924:635): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffce20855a0 a2=0 a3=7ffce208558c items=0 ppid=2514 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:35.991428 kernel: audit: type=1327 audit(1719332735.924:635): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:35.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:35.993000 audit[4412]: NETFILTER_CFG table=nat:107 family=2 entries=44 op=nft_register_rule pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:35.993000 audit[4412]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffce20855a0 a2=0 a3=7ffce208558c items=0 ppid=2514 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.081263 kernel: audit: type=1325 audit(1719332735.993:636): table=nat:107 family=2 entries=44 op=nft_register_rule pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:36.081420 kernel: audit: type=1300 audit(1719332735.993:636): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffce20855a0 a2=0 a3=7ffce208558c items=0 ppid=2514 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.081449 kernel: audit: type=1327 audit(1719332735.993:636): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:35.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:36.292364 
systemd-networkd[1116]: calie89ac64ba5c: Link UP Jun 25 16:25:36.331613 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:36.331933 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie89ac64ba5c: link becomes ready Jun 25 16:25:36.332080 systemd-networkd[1116]: calie89ac64ba5c: Gained carrier Jun 25 16:25:36.343792 kubelet[2308]: I0625 16:25:36.343451 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-27tds" podStartSLOduration=69.343427437 podStartE2EDuration="1m9.343427437s" podCreationTimestamp="2024-06-25 16:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:25:35.531887992 +0000 UTC m=+82.876548045" watchObservedRunningTime="2024-06-25 16:25:36.343427437 +0000 UTC m=+83.688087471" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.328 [INFO][4332] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0 calico-kube-controllers-678c89559- calico-system ef011844-7458-4dc5-b4b3-48140b3ba006 908 0 2024-06-25 16:24:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:678c89559 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-678c89559-dn457 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie89ac64ba5c [] []}} ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" Pod="calico-kube-controllers-678c89559-dn457" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.328 [INFO][4332] k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" Pod="calico-kube-controllers-678c89559-dn457" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.644 [INFO][4390] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" HandleID="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.832 [INFO][4390] ipam_plugin.go 264: Auto assigning IP ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" HandleID="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-678c89559-dn457", "timestamp":"2024-06-25 16:25:35.644555487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.833 [INFO][4390] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.833 [INFO][4390] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.833 [INFO][4390] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:35.841 [INFO][4390] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.082 [INFO][4390] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.089 [INFO][4390] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.091 [INFO][4390] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.093 [INFO][4390] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.093 [INFO][4390] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.094 [INFO][4390] ipam.go 1685: Creating new handle: k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89 Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.097 [INFO][4390] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.282 [INFO][4390] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" 
host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.282 [INFO][4390] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" host="localhost" Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.282 [INFO][4390] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:36.345928 containerd[1293]: 2024-06-25 16:25:36.282 [INFO][4390] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" HandleID="k8s-pod-network.6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:36.346974 containerd[1293]: 2024-06-25 16:25:36.286 [INFO][4332] k8s.go 386: Populated endpoint ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" Pod="calico-kube-controllers-678c89559-dn457" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0", GenerateName:"calico-kube-controllers-678c89559-", Namespace:"calico-system", SelfLink:"", UID:"ef011844-7458-4dc5-b4b3-48140b3ba006", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678c89559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-678c89559-dn457", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie89ac64ba5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:36.346974 containerd[1293]: 2024-06-25 16:25:36.287 [INFO][4332] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" Pod="calico-kube-controllers-678c89559-dn457" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:36.346974 containerd[1293]: 2024-06-25 16:25:36.287 [INFO][4332] dataplane_linux.go 68: Setting the host side veth name to calie89ac64ba5c ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" Pod="calico-kube-controllers-678c89559-dn457" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:36.346974 containerd[1293]: 2024-06-25 16:25:36.332 [INFO][4332] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" Pod="calico-kube-controllers-678c89559-dn457" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:36.346974 containerd[1293]: 2024-06-25 16:25:36.332 [INFO][4332] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" 
Pod="calico-kube-controllers-678c89559-dn457" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0", GenerateName:"calico-kube-controllers-678c89559-", Namespace:"calico-system", SelfLink:"", UID:"ef011844-7458-4dc5-b4b3-48140b3ba006", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678c89559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89", Pod:"calico-kube-controllers-678c89559-dn457", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie89ac64ba5c", MAC:"8a:37:f2:e2:9f:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:36.346974 containerd[1293]: 2024-06-25 16:25:36.342 [INFO][4332] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89" Namespace="calico-system" Pod="calico-kube-controllers-678c89559-dn457" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:25:36.355000 audit[4438]: NETFILTER_CFG table=filter:108 family=2 entries=42 op=nft_register_chain pid=4438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:36.355000 audit[4438]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7fff39b35820 a2=0 a3=7fff39b3580c items=0 ppid=3935 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.401900 kernel: audit: type=1325 audit(1719332736.355:637): table=filter:108 family=2 entries=42 op=nft_register_chain pid=4438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:36.402055 kernel: audit: type=1300 audit(1719332736.355:637): arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7fff39b35820 a2=0 a3=7fff39b3580c items=0 ppid=3935 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.402086 kernel: audit: type=1327 audit(1719332736.355:637): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:36.355000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:36.404673 kubelet[2308]: E0625 16:25:36.404643 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:36.591133 containerd[1293]: time="2024-06-25T16:25:36.591021268Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:36.591133 containerd[1293]: time="2024-06-25T16:25:36.591095268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:36.591133 containerd[1293]: time="2024-06-25T16:25:36.591108854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:36.591133 containerd[1293]: time="2024-06-25T16:25:36.591117781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:36.618005 systemd[1]: Started cri-containerd-6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89.scope - libcontainer container 6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89. Jun 25 16:25:36.626000 audit: BPF prog-id=162 op=LOAD Jun 25 16:25:36.628845 kernel: audit: type=1334 audit(1719332736.626:638): prog-id=162 op=LOAD Jun 25 16:25:36.628000 audit: BPF prog-id=163 op=LOAD Jun 25 16:25:36.628000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4453 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303366613539383762326231326533373530363333623639303731 Jun 25 16:25:36.628000 audit: BPF prog-id=164 op=LOAD Jun 25 16:25:36.628000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4453 pid=4463 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303366613539383762326231326533373530363333623639303731 Jun 25 16:25:36.628000 audit: BPF prog-id=164 op=UNLOAD Jun 25 16:25:36.628000 audit: BPF prog-id=163 op=UNLOAD Jun 25 16:25:36.628000 audit: BPF prog-id=165 op=LOAD Jun 25 16:25:36.628000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4453 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303366613539383762326231326533373530363333623639303731 Jun 25 16:25:36.630580 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:25:36.664129 containerd[1293]: time="2024-06-25T16:25:36.658048092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-678c89559-dn457,Uid:ef011844-7458-4dc5-b4b3-48140b3ba006,Namespace:calico-system,Attempt:1,} returns sandbox id \"6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89\"" Jun 25 16:25:36.665688 containerd[1293]: time="2024-06-25T16:25:36.665660674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:25:36.763063 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:57730.service - 
OpenSSH per-connection server daemon (10.0.0.1:57730). Jun 25 16:25:36.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.104:22-10.0.0.1:57730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:36.798000 audit[4486]: USER_ACCT pid=4486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.799979 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 57730 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:36.799000 audit[4486]: CRED_ACQ pid=4486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.799000 audit[4486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc274d9cc0 a2=3 a3=7f3c57f72480 items=0 ppid=1 pid=4486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.799000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:36.801054 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:36.804795 systemd-logind[1278]: New session 16 of user core. Jun 25 16:25:36.814104 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:25:36.817000 audit[4486]: USER_START pid=4486 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.818000 audit[4488]: CRED_ACQ pid=4488 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.841000 audit[4490]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:36.841000 audit[4490]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc3957be50 a2=0 a3=7ffc3957be3c items=0 ppid=2514 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:36.856000 audit[4490]: NETFILTER_CFG table=nat:110 family=2 entries=56 op=nft_register_chain pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:36.856000 audit[4490]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc3957be50 a2=0 a3=7ffc3957be3c items=0 ppid=2514 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:36.931796 
sshd[4486]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:36.932000 audit[4486]: USER_END pid=4486 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.932000 audit[4486]: CRED_DISP pid=4486 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.940324 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:57730.service: Deactivated successfully. Jun 25 16:25:36.940897 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:25:36.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.104:22-10.0.0.1:57730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:36.941462 systemd-logind[1278]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:25:36.942803 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:57744.service - OpenSSH per-connection server daemon (10.0.0.1:57744). Jun 25 16:25:36.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.104:22-10.0.0.1:57744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:36.943558 systemd-logind[1278]: Removed session 16. 
Jun 25 16:25:36.976000 audit[4504]: USER_ACCT pid=4504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.977458 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 57744 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:36.977000 audit[4504]: CRED_ACQ pid=4504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.977000 audit[4504]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff84804b80 a2=3 a3=7f0935655480 items=0 ppid=1 pid=4504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:36.977000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:36.978634 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:36.982137 systemd-logind[1278]: New session 17 of user core. Jun 25 16:25:36.989968 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 16:25:36.992000 audit[4504]: USER_START pid=4504 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:36.994000 audit[4506]: CRED_ACQ pid=4506 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.137485 systemd-networkd[1116]: cali13f5e4066e1: Link UP Jun 25 16:25:37.182274 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali13f5e4066e1: link becomes ready Jun 25 16:25:37.183241 systemd-networkd[1116]: cali13f5e4066e1: Gained carrier Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.087 [INFO][4399] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bw7kv-eth0 csi-node-driver- calico-system a82ce7d0-b43c-4d81-ae9f-10974dd66ff7 926 0 2024-06-25 16:24:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-bw7kv eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali13f5e4066e1 [] []}} ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.087 [INFO][4399] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" 
Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.110 [INFO][4413] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" HandleID="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.287 [INFO][4413] ipam_plugin.go 264: Auto assigning IP ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" HandleID="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bw7kv", "timestamp":"2024-06-25 16:25:36.110886694 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.287 [INFO][4413] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.287 [INFO][4413] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.287 [INFO][4413] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.292 [INFO][4413] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.344 [INFO][4413] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.579 [INFO][4413] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.838 [INFO][4413] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.927 [INFO][4413] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.927 [INFO][4413] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.929 [INFO][4413] ipam.go 1685: Creating new handle: k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:36.933 [INFO][4413] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:37.132 [INFO][4413] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" 
host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:37.133 [INFO][4413] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" host="localhost" Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:37.133 [INFO][4413] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:37.187325 containerd[1293]: 2024-06-25 16:25:37.133 [INFO][4413] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" HandleID="k8s-pod-network.7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:37.188122 containerd[1293]: 2024-06-25 16:25:37.135 [INFO][4399] k8s.go 386: Populated endpoint ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bw7kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bw7kv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali13f5e4066e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:37.188122 containerd[1293]: 2024-06-25 16:25:37.135 [INFO][4399] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:37.188122 containerd[1293]: 2024-06-25 16:25:37.135 [INFO][4399] dataplane_linux.go 68: Setting the host side veth name to cali13f5e4066e1 ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:37.188122 containerd[1293]: 2024-06-25 16:25:37.136 [INFO][4399] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:37.188122 containerd[1293]: 2024-06-25 16:25:37.136 [INFO][4399] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bw7kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a", Pod:"csi-node-driver-bw7kv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali13f5e4066e1", MAC:"12:66:3d:5b:56:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:37.188122 containerd[1293]: 2024-06-25 16:25:37.183 [INFO][4399] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a" Namespace="calico-system" Pod="csi-node-driver-bw7kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:25:37.198000 audit[4522]: NETFILTER_CFG table=filter:111 family=2 entries=42 op=nft_register_chain pid=4522 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:37.198000 audit[4522]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 
a1=7ffe87c51340 a2=0 a3=7ffe87c5132c items=0 ppid=3935 pid=4522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.198000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:37.407291 kubelet[2308]: E0625 16:25:37.407170 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:37.427297 containerd[1293]: time="2024-06-25T16:25:37.427197401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:37.427297 containerd[1293]: time="2024-06-25T16:25:37.427274999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:37.427676 containerd[1293]: time="2024-06-25T16:25:37.427641826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:37.427676 containerd[1293]: time="2024-06-25T16:25:37.427663227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:37.448967 systemd[1]: Started cri-containerd-7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a.scope - libcontainer container 7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a. 
Jun 25 16:25:37.455000 audit: BPF prog-id=166 op=LOAD Jun 25 16:25:37.455000 audit: BPF prog-id=167 op=LOAD Jun 25 16:25:37.455000 audit[4546]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4537 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343063396537316134616364303765646264653539353765393563 Jun 25 16:25:37.455000 audit: BPF prog-id=168 op=LOAD Jun 25 16:25:37.455000 audit[4546]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4537 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343063396537316134616364303765646264653539353765393563 Jun 25 16:25:37.455000 audit: BPF prog-id=168 op=UNLOAD Jun 25 16:25:37.455000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:25:37.455000 audit: BPF prog-id=169 op=LOAD Jun 25 16:25:37.455000 audit[4546]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4537 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.455000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343063396537316134616364303765646264653539353765393563 Jun 25 16:25:37.457484 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:25:37.473611 containerd[1293]: time="2024-06-25T16:25:37.473559706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bw7kv,Uid:a82ce7d0-b43c-4d81-ae9f-10974dd66ff7,Namespace:calico-system,Attempt:1,} returns sandbox id \"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a\"" Jun 25 16:25:37.519884 sshd[4504]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:37.520000 audit[4504]: USER_END pid=4504 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.520000 audit[4504]: CRED_DISP pid=4504 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.529176 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:57744.service: Deactivated successfully. Jun 25 16:25:37.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.104:22-10.0.0.1:57744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:37.529801 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:25:37.530302 systemd-logind[1278]: Session 17 logged out. Waiting for processes to exit. 
Jun 25 16:25:37.531556 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:57752.service - OpenSSH per-connection server daemon (10.0.0.1:57752). Jun 25 16:25:37.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.104:22-10.0.0.1:57752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:37.532304 systemd-logind[1278]: Removed session 17. Jun 25 16:25:37.564000 audit[4572]: USER_ACCT pid=4572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.566057 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 57752 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:37.566000 audit[4572]: CRED_ACQ pid=4572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.566000 audit[4572]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff03aae6f0 a2=3 a3=7fb8c5fae480 items=0 ppid=1 pid=4572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.566000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:37.567850 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:37.572953 systemd-logind[1278]: New session 18 of user core. Jun 25 16:25:37.581082 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:25:37.584000 audit[4572]: USER_START pid=4572 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.586000 audit[4574]: CRED_ACQ pid=4574 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.840493 sshd[4572]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:37.840000 audit[4572]: USER_END pid=4572 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.841000 audit[4572]: CRED_DISP pid=4572 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:37.843153 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:57752.service: Deactivated successfully. Jun 25 16:25:37.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.104:22-10.0.0.1:57752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:37.844043 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:25:37.844670 systemd-logind[1278]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:25:37.845674 systemd-logind[1278]: Removed session 18. 
Jun 25 16:25:38.234018 systemd-networkd[1116]: calie89ac64ba5c: Gained IPv6LL Jun 25 16:25:38.409988 kubelet[2308]: E0625 16:25:38.409951 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:39.130998 systemd-networkd[1116]: cali13f5e4066e1: Gained IPv6LL Jun 25 16:25:39.734152 kubelet[2308]: E0625 16:25:39.734115 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:41.390804 containerd[1293]: time="2024-06-25T16:25:41.390723045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:41.544905 containerd[1293]: time="2024-06-25T16:25:41.544775444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:25:41.679334 containerd[1293]: time="2024-06-25T16:25:41.679164317Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:41.734618 kubelet[2308]: E0625 16:25:41.734562 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:41.823441 containerd[1293]: time="2024-06-25T16:25:41.823359826Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:41.946855 containerd[1293]: time="2024-06-25T16:25:41.946690658Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:41.947756 containerd[1293]: time="2024-06-25T16:25:41.947712058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 5.282007921s" Jun 25 16:25:41.947839 containerd[1293]: time="2024-06-25T16:25:41.947766331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:25:41.948968 containerd[1293]: time="2024-06-25T16:25:41.948853596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:25:42.000098 containerd[1293]: time="2024-06-25T16:25:42.000057283Z" level=info msg="CreateContainer within sandbox \"6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:25:42.850498 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:57756.service - OpenSSH per-connection server daemon (10.0.0.1:57756). Jun 25 16:25:42.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.104:22-10.0.0.1:57756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:42.874540 kernel: kauditd_printk_skb: 65 callbacks suppressed Jun 25 16:25:42.874683 kernel: audit: type=1130 audit(1719332742.849:680): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.104:22-10.0.0.1:57756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:43.022000 audit[4604]: USER_ACCT pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.024171 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 57756 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:43.026277 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:43.024000 audit[4604]: CRED_ACQ pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.030619 systemd-logind[1278]: New session 19 of user core. 
Jun 25 16:25:43.031433 kernel: audit: type=1101 audit(1719332743.022:681): pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.031491 kernel: audit: type=1103 audit(1719332743.024:682): pid=4604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.031518 kernel: audit: type=1006 audit(1719332743.024:683): pid=4604 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 16:25:43.024000 audit[4604]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8b73fc40 a2=3 a3=7feaaec47480 items=0 ppid=1 pid=4604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.125315 kernel: audit: type=1300 audit(1719332743.024:683): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8b73fc40 a2=3 a3=7feaaec47480 items=0 ppid=1 pid=4604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.125444 kernel: audit: type=1327 audit(1719332743.024:683): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:43.024000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:43.128174 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 16:25:43.132000 audit[4604]: USER_START pid=4604 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.134000 audit[4606]: CRED_ACQ pid=4606 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.219666 kernel: audit: type=1105 audit(1719332743.132:684): pid=4604 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.219887 kernel: audit: type=1103 audit(1719332743.134:685): pid=4606 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:43.710665 containerd[1293]: time="2024-06-25T16:25:43.710614647Z" level=info msg="CreateContainer within sandbox \"6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"311b79d3e7948322cdd574241099f61faf4238388b8e6a7e55da0a671a572758\"" Jun 25 16:25:43.711794 containerd[1293]: time="2024-06-25T16:25:43.711760562Z" level=info msg="StartContainer for \"311b79d3e7948322cdd574241099f61faf4238388b8e6a7e55da0a671a572758\"" Jun 25 16:25:43.747069 systemd[1]: Started cri-containerd-311b79d3e7948322cdd574241099f61faf4238388b8e6a7e55da0a671a572758.scope - libcontainer container 311b79d3e7948322cdd574241099f61faf4238388b8e6a7e55da0a671a572758. 
Jun 25 16:25:43.758000 audit: BPF prog-id=170 op=LOAD Jun 25 16:25:43.760000 audit: BPF prog-id=171 op=LOAD Jun 25 16:25:43.776466 kernel: audit: type=1334 audit(1719332743.758:686): prog-id=170 op=LOAD Jun 25 16:25:43.776538 kernel: audit: type=1334 audit(1719332743.760:687): prog-id=171 op=LOAD Jun 25 16:25:43.760000 audit[4645]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4453 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316237396433653739343833323263646435373432343130393966 Jun 25 16:25:43.760000 audit: BPF prog-id=172 op=LOAD Jun 25 16:25:43.760000 audit[4645]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4453 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316237396433653739343833323263646435373432343130393966 Jun 25 16:25:43.760000 audit: BPF prog-id=172 op=UNLOAD Jun 25 16:25:43.760000 audit: BPF prog-id=171 op=UNLOAD Jun 25 16:25:43.760000 audit: BPF prog-id=173 op=LOAD Jun 25 16:25:43.760000 audit[4645]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4453 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316237396433653739343833323263646435373432343130393966 Jun 25 16:25:44.021208 sshd[4604]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:44.021000 audit[4604]: USER_END pid=4604 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:44.021000 audit[4604]: CRED_DISP pid=4604 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:44.024003 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:57756.service: Deactivated successfully. Jun 25 16:25:44.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.104:22-10.0.0.1:57756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:44.025038 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:25:44.025519 systemd-logind[1278]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:25:44.026307 systemd-logind[1278]: Removed session 19. 
Jun 25 16:25:44.062809 containerd[1293]: time="2024-06-25T16:25:44.062719929Z" level=info msg="StartContainer for \"311b79d3e7948322cdd574241099f61faf4238388b8e6a7e55da0a671a572758\" returns successfully" Jun 25 16:25:46.513807 kubelet[2308]: I0625 16:25:46.513749 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-678c89559-dn457" podStartSLOduration=66.230495931 podStartE2EDuration="1m11.513730774s" podCreationTimestamp="2024-06-25 16:24:35 +0000 UTC" firstStartedPulling="2024-06-25 16:25:36.665397353 +0000 UTC m=+84.010057386" lastFinishedPulling="2024-06-25 16:25:41.948632186 +0000 UTC m=+89.293292229" observedRunningTime="2024-06-25 16:25:44.609511493 +0000 UTC m=+91.954171526" watchObservedRunningTime="2024-06-25 16:25:46.513730774 +0000 UTC m=+93.858390807" Jun 25 16:25:47.734742 kubelet[2308]: E0625 16:25:47.734697 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:49.036041 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:40870.service - OpenSSH per-connection server daemon (10.0.0.1:40870). Jun 25 16:25:49.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.104:22-10.0.0.1:40870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:49.089164 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:25:49.089337 kernel: audit: type=1130 audit(1719332749.035:695): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.104:22-10.0.0.1:40870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:49.118000 audit[4722]: USER_ACCT pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.119139 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 40870 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:49.120500 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:49.123765 systemd-logind[1278]: New session 20 of user core. Jun 25 16:25:49.119000 audit[4722]: CRED_ACQ pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.213343 kernel: audit: type=1101 audit(1719332749.118:696): pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.213392 kernel: audit: type=1103 audit(1719332749.119:697): pid=4722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.213426 kernel: audit: type=1006 audit(1719332749.119:698): pid=4722 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jun 25 16:25:49.119000 audit[4722]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfaa3cb30 a2=3 a3=7f63cc580480 items=0 ppid=1 pid=4722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:49.301002 kernel: audit: type=1300 audit(1719332749.119:698): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfaa3cb30 a2=3 a3=7f63cc580480 items=0 ppid=1 pid=4722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:49.119000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:49.302141 kernel: audit: type=1327 audit(1719332749.119:698): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:49.305118 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 16:25:49.308000 audit[4722]: USER_START pid=4722 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.310000 audit[4724]: CRED_ACQ pid=4724 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.315523 kernel: audit: type=1105 audit(1719332749.308:699): pid=4722 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.315614 kernel: audit: type=1103 audit(1719332749.310:700): pid=4724 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.988241 sshd[4722]: 
pam_unix(sshd:session): session closed for user core Jun 25 16:25:49.988000 audit[4722]: USER_END pid=4722 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.991104 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:40870.service: Deactivated successfully. Jun 25 16:25:49.991873 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:25:49.988000 audit[4722]: CRED_DISP pid=4722 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.994758 systemd-logind[1278]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:25:49.995919 systemd-logind[1278]: Removed session 20. Jun 25 16:25:49.996119 kernel: audit: type=1106 audit(1719332749.988:701): pid=4722 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.996175 kernel: audit: type=1104 audit(1719332749.988:702): pid=4722 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:49.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.104:22-10.0.0.1:40870 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:50.891398 containerd[1293]: time="2024-06-25T16:25:50.891318749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:50.973327 containerd[1293]: time="2024-06-25T16:25:50.973225834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:25:51.040343 containerd[1293]: time="2024-06-25T16:25:51.040260169Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:51.126473 containerd[1293]: time="2024-06-25T16:25:51.126386081Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:51.227402 containerd[1293]: time="2024-06-25T16:25:51.227236945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:51.228004 containerd[1293]: time="2024-06-25T16:25:51.227950827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 9.279063829s" Jun 25 16:25:51.228086 containerd[1293]: time="2024-06-25T16:25:51.228005251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:25:51.230601 containerd[1293]: time="2024-06-25T16:25:51.230568166Z" level=info msg="CreateContainer within sandbox 
\"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:25:51.802056 containerd[1293]: time="2024-06-25T16:25:51.801970491Z" level=info msg="CreateContainer within sandbox \"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"61a794e6c17e09047129447cbbe49a48f9ad7b9ab5106a8435dc535a07ec7804\"" Jun 25 16:25:51.803104 containerd[1293]: time="2024-06-25T16:25:51.803025239Z" level=info msg="StartContainer for \"61a794e6c17e09047129447cbbe49a48f9ad7b9ab5106a8435dc535a07ec7804\"" Jun 25 16:25:51.836184 systemd[1]: run-containerd-runc-k8s.io-61a794e6c17e09047129447cbbe49a48f9ad7b9ab5106a8435dc535a07ec7804-runc.cNNOYx.mount: Deactivated successfully. Jun 25 16:25:51.843133 systemd[1]: Started cri-containerd-61a794e6c17e09047129447cbbe49a48f9ad7b9ab5106a8435dc535a07ec7804.scope - libcontainer container 61a794e6c17e09047129447cbbe49a48f9ad7b9ab5106a8435dc535a07ec7804. 
Jun 25 16:25:51.857000 audit: BPF prog-id=174 op=LOAD Jun 25 16:25:51.857000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4537 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:51.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631613739346536633137653039303437313239343437636262653439 Jun 25 16:25:51.857000 audit: BPF prog-id=175 op=LOAD Jun 25 16:25:51.857000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4537 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:51.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631613739346536633137653039303437313239343437636262653439 Jun 25 16:25:51.857000 audit: BPF prog-id=175 op=UNLOAD Jun 25 16:25:51.857000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:25:51.857000 audit: BPF prog-id=176 op=LOAD Jun 25 16:25:51.857000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4537 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:51.857000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631613739346536633137653039303437313239343437636262653439 Jun 25 16:25:52.735052 kubelet[2308]: E0625 16:25:52.735019 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:25:53.152678 containerd[1293]: time="2024-06-25T16:25:53.152610679Z" level=info msg="StartContainer for \"61a794e6c17e09047129447cbbe49a48f9ad7b9ab5106a8435dc535a07ec7804\" returns successfully" Jun 25 16:25:53.153752 containerd[1293]: time="2024-06-25T16:25:53.153706675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:25:54.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.104:22-10.0.0.1:40872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:54.998436 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:40872.service - OpenSSH per-connection server daemon (10.0.0.1:40872). Jun 25 16:25:55.018810 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 16:25:55.019034 kernel: audit: type=1130 audit(1719332754.997:709): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.104:22-10.0.0.1:40872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:55.863000 audit[4809]: USER_ACCT pid=4809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:55.864711 sshd[4809]: Accepted publickey for core from 10.0.0.1 port 40872 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:25:55.866587 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:55.864000 audit[4809]: CRED_ACQ pid=4809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:55.871277 systemd-logind[1278]: New session 21 of user core. Jun 25 16:25:55.872373 kernel: audit: type=1101 audit(1719332755.863:710): pid=4809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:55.872477 kernel: audit: type=1103 audit(1719332755.864:711): pid=4809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:55.872496 kernel: audit: type=1006 audit(1719332755.864:712): pid=4809 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jun 25 16:25:55.864000 audit[4809]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc0194840 a2=3 a3=7f2bf28e0480 items=0 ppid=1 pid=4809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:55.878104 kernel: audit: type=1300 audit(1719332755.864:712): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc0194840 a2=3 a3=7f2bf28e0480 items=0 ppid=1 pid=4809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:55.878158 kernel: audit: type=1327 audit(1719332755.864:712): proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:55.864000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:55.886091 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:25:55.889000 audit[4809]: USER_START pid=4809 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:55.891000 audit[4812]: CRED_ACQ pid=4812 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:55.951447 kernel: audit: type=1105 audit(1719332755.889:713): pid=4809 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:55.951518 kernel: audit: type=1103 audit(1719332755.891:714): pid=4812 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:56.390178 sshd[4809]: 
pam_unix(sshd:session): session closed for user core Jun 25 16:25:56.470000 audit[4809]: USER_END pid=4809 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:56.473185 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:40872.service: Deactivated successfully. Jun 25 16:25:56.474082 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:25:56.474791 systemd-logind[1278]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:25:56.470000 audit[4809]: CRED_DISP pid=4809 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:56.475601 systemd-logind[1278]: Removed session 21. Jun 25 16:25:56.478089 kernel: audit: type=1106 audit(1719332756.470:715): pid=4809 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:56.478133 kernel: audit: type=1104 audit(1719332756.470:716): pid=4809 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:25:56.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.104:22-10.0.0.1:40872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:01.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.104:22-10.0.0.1:57536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.402878 systemd[1]: Started sshd@21-10.0.0.104:22-10.0.0.1:57536.service - OpenSSH per-connection server daemon (10.0.0.1:57536). Jun 25 16:26:01.422336 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:26:01.422487 kernel: audit: type=1130 audit(1719332761.402:718): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.104:22-10.0.0.1:57536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.449000 audit[4854]: USER_ACCT pid=4854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.450488 sshd[4854]: Accepted publickey for core from 10.0.0.1 port 57536 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:01.450000 audit[4854]: CRED_ACQ pid=4854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.488714 kernel: audit: type=1101 audit(1719332761.449:719): pid=4854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.488782 kernel: audit: type=1103 audit(1719332761.450:720): pid=4854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.488818 kernel: audit: type=1006 audit(1719332761.450:721): pid=4854 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:26:01.490717 kernel: audit: type=1300 audit(1719332761.450:721): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff62a3d50 a2=3 a3=7faaa42ac480 items=0 ppid=1 pid=4854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:01.450000 audit[4854]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff62a3d50 a2=3 a3=7faaa42ac480 items=0 ppid=1 pid=4854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:01.494253 kernel: audit: type=1327 audit(1719332761.450:721): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:01.450000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:01.649030 sshd[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:01.653392 systemd-logind[1278]: New session 22 of user core. Jun 25 16:26:01.664030 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 16:26:01.667000 audit[4854]: USER_START pid=4854 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.669000 audit[4856]: CRED_ACQ pid=4856 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.676110 kernel: audit: type=1105 audit(1719332761.667:722): pid=4854 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.676161 kernel: audit: type=1103 audit(1719332761.669:723): pid=4856 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.839489 sshd[4854]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:01.839000 audit[4854]: USER_END pid=4854 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.842284 systemd[1]: sshd@21-10.0.0.104:22-10.0.0.1:57536.service: Deactivated successfully. Jun 25 16:26:01.843090 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:26:01.843650 systemd-logind[1278]: Session 22 logged out. Waiting for processes to exit. 
Jun 25 16:26:01.873471 kernel: audit: type=1106 audit(1719332761.839:724): pid=4854 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.873527 kernel: audit: type=1104 audit(1719332761.839:725): pid=4854 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.839000 audit[4854]: CRED_DISP pid=4854 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:01.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.104:22-10.0.0.1:57536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:01.844488 systemd-logind[1278]: Removed session 22. Jun 25 16:26:06.850344 systemd[1]: Started sshd@22-10.0.0.104:22-10.0.0.1:54360.service - OpenSSH per-connection server daemon (10.0.0.1:54360). Jun 25 16:26:06.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.104:22-10.0.0.1:54360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:07.145347 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:26:07.145488 kernel: audit: type=1130 audit(1719332766.848:727): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.104:22-10.0.0.1:54360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:07.186000 audit[4893]: USER_ACCT pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:07.190056 sshd[4893]: Accepted publickey for core from 10.0.0.1 port 54360 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:07.190531 sshd[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:07.188000 audit[4893]: CRED_ACQ pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:07.196333 kernel: audit: type=1101 audit(1719332767.186:728): pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:07.196461 kernel: audit: type=1103 audit(1719332767.188:729): pid=4893 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:07.196492 kernel: audit: type=1006 audit(1719332767.188:730): pid=4893 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:26:07.198484 kernel: audit: type=1300 audit(1719332767.188:730): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd360a84a0 a2=3 a3=7faa8e560480 items=0 ppid=1 pid=4893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:07.188000 audit[4893]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd360a84a0 a2=3 a3=7faa8e560480 items=0 ppid=1 pid=4893 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:07.198125 systemd-logind[1278]: New session 23 of user core. Jun 25 16:26:07.188000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:07.202514 kernel: audit: type=1327 audit(1719332767.188:730): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:07.209142 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 16:26:07.212000 audit[4893]: USER_START pid=4893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:07.214000 audit[4895]: CRED_ACQ pid=4895 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:07.220751 kernel: audit: type=1105 audit(1719332767.212:731): pid=4893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:07.220804 kernel: audit: type=1103 audit(1719332767.214:732): pid=4895 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:08.026262 
containerd[1293]: time="2024-06-25T16:26:08.026190599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:08.526510 containerd[1293]: time="2024-06-25T16:26:08.526410843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:26:08.536867 sshd[4893]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:08.536000 audit[4893]: USER_END pid=4893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:08.539774 systemd[1]: sshd@22-10.0.0.104:22-10.0.0.1:54360.service: Deactivated successfully. Jun 25 16:26:08.540444 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:26:08.541329 systemd-logind[1278]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:26:08.542029 systemd-logind[1278]: Removed session 23. 
Jun 25 16:26:08.536000 audit[4893]: CRED_DISP pid=4893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:08.982770 kernel: audit: type=1106 audit(1719332768.536:733): pid=4893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:08.982817 kernel: audit: type=1104 audit(1719332768.536:734): pid=4893 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:08.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.104:22-10.0.0.1:54360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:09.079350 containerd[1293]: time="2024-06-25T16:26:09.079276209Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:09.581255 containerd[1293]: time="2024-06-25T16:26:09.581175354Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:09.643000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.643000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.643000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00321aca0 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:26:09.643000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:09.643000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002d32cf0 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:26:09.643000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:09.744000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.744000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.744000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=76 a1=c004c94640 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:26:09.744000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=75 a1=c005daf260 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:26:09.744000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 
16:26:09.744000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:09.744000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.744000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=75 a1=c005daf350 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:26:09.744000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:09.745000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.745000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=76 a1=c007321b60 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:26:09.745000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:09.745000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.745000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=76 a1=c004c94680 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:26:09.745000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:09.745000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:09.745000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=76 a1=c007321bc0 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null) Jun 25 16:26:09.745000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jun 25 16:26:09.837381 containerd[1293]: time="2024-06-25T16:26:09.837244165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:09.838415 containerd[1293]: time="2024-06-25T16:26:09.838356816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 16.684600637s" Jun 25 16:26:09.838415 containerd[1293]: time="2024-06-25T16:26:09.838416348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:26:09.840607 containerd[1293]: time="2024-06-25T16:26:09.840566739Z" level=info msg="CreateContainer within sandbox \"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:26:12.467918 containerd[1293]: time="2024-06-25T16:26:12.467790605Z" level=info msg="CreateContainer within sandbox \"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"878d2b71f374975e9e21edacf010d3cbebe81dbc2b1e455a6fd1f013dc3238c6\"" Jun 25 16:26:12.468515 containerd[1293]: 
time="2024-06-25T16:26:12.468474566Z" level=info msg="StartContainer for \"878d2b71f374975e9e21edacf010d3cbebe81dbc2b1e455a6fd1f013dc3238c6\"" Jun 25 16:26:12.503070 systemd[1]: Started cri-containerd-878d2b71f374975e9e21edacf010d3cbebe81dbc2b1e455a6fd1f013dc3238c6.scope - libcontainer container 878d2b71f374975e9e21edacf010d3cbebe81dbc2b1e455a6fd1f013dc3238c6. Jun 25 16:26:12.515000 audit: BPF prog-id=177 op=LOAD Jun 25 16:26:12.519899 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:26:12.520028 kernel: audit: type=1334 audit(1719332772.515:744): prog-id=177 op=LOAD Jun 25 16:26:12.520055 kernel: audit: type=1300 audit(1719332772.515:744): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4537 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:12.515000 audit[4923]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4537 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:12.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837386432623731663337343937356539653231656461636630313064 Jun 25 16:26:12.516000 audit: BPF prog-id=178 op=LOAD Jun 25 16:26:12.531420 kernel: audit: type=1327 audit(1719332772.515:744): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837386432623731663337343937356539653231656461636630313064 Jun 25 16:26:12.531466 kernel: 
audit: type=1334 audit(1719332772.516:745): prog-id=178 op=LOAD Jun 25 16:26:12.531498 kernel: audit: type=1300 audit(1719332772.516:745): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4537 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:12.516000 audit[4923]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4537 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:12.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837386432623731663337343937356539653231656461636630313064 Jun 25 16:26:12.570192 kernel: audit: type=1327 audit(1719332772.516:745): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837386432623731663337343937356539653231656461636630313064 Jun 25 16:26:12.570349 kernel: audit: type=1334 audit(1719332772.516:746): prog-id=178 op=UNLOAD Jun 25 16:26:12.516000 audit: BPF prog-id=178 op=UNLOAD Jun 25 16:26:12.516000 audit: BPF prog-id=177 op=UNLOAD Jun 25 16:26:12.572165 kernel: audit: type=1334 audit(1719332772.516:747): prog-id=177 op=UNLOAD Jun 25 16:26:12.572214 kernel: audit: type=1334 audit(1719332772.516:748): prog-id=179 op=LOAD Jun 25 16:26:12.516000 audit: BPF prog-id=179 op=LOAD Jun 25 16:26:12.656552 kernel: audit: type=1300 audit(1719332772.516:748): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 
items=0 ppid=4537 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:12.516000 audit[4923]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4537 pid=4923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:12.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837386432623731663337343937356539653231656461636630313064 Jun 25 16:26:13.046819 kubelet[2308]: I0625 16:26:13.046757 2308 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:26:13.046819 kubelet[2308]: I0625 16:26:13.046809 2308 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:26:13.057634 containerd[1293]: time="2024-06-25T16:26:13.057577370Z" level=info msg="StartContainer for \"878d2b71f374975e9e21edacf010d3cbebe81dbc2b1e455a6fd1f013dc3238c6\" returns successfully" Jun 25 16:26:13.253243 kubelet[2308]: I0625 16:26:13.253160 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bw7kv" podStartSLOduration=65.890348091 podStartE2EDuration="1m38.253128935s" podCreationTimestamp="2024-06-25 16:24:35 +0000 UTC" firstStartedPulling="2024-06-25 16:25:37.476487443 +0000 UTC m=+84.821147476" lastFinishedPulling="2024-06-25 16:26:09.839268287 +0000 UTC m=+117.183928320" observedRunningTime="2024-06-25 16:26:13.252655411 +0000 
UTC m=+120.597315454" watchObservedRunningTime="2024-06-25 16:26:13.253128935 +0000 UTC m=+120.597788968" Jun 25 16:26:13.547782 systemd[1]: Started sshd@23-10.0.0.104:22-10.0.0.1:54362.service - OpenSSH per-connection server daemon (10.0.0.1:54362). Jun 25 16:26:13.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.104:22-10.0.0.1:54362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.585000 audit[4956]: USER_ACCT pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:13.587396 sshd[4956]: Accepted publickey for core from 10.0.0.1 port 54362 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:13.586000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:13.586000 audit[4956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd631d4d0 a2=3 a3=7f07da329480 items=0 ppid=1 pid=4956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:13.586000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:13.589005 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:13.597422 systemd-logind[1278]: New session 24 of user core. Jun 25 16:26:13.603150 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 16:26:13.607000 audit[4956]: USER_START pid=4956 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:13.609000 audit[4958]: CRED_ACQ pid=4958 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:13.906157 sshd[4956]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:13.905000 audit[4956]: USER_END pid=4956 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:13.906000 audit[4956]: CRED_DISP pid=4956 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:13.909913 systemd[1]: sshd@23-10.0.0.104:22-10.0.0.1:54362.service: Deactivated successfully. Jun 25 16:26:13.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.104:22-10.0.0.1:54362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:13.910675 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:26:13.911445 systemd-logind[1278]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:26:13.912367 systemd-logind[1278]: Removed session 24. 
Jun 25 16:26:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003688a20 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:26:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003688d40 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:26:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" 
path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c003688d60 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:26:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:14.375000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:26:14.375000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00373ee40 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null) Jun 25 16:26:14.375000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:26:15.429606 containerd[1293]: time="2024-06-25T16:26:15.429553980Z" level=info msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\"" Jun 25 
16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.463 [WARNING][4988] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0", GenerateName:"calico-kube-controllers-678c89559-", Namespace:"calico-system", SelfLink:"", UID:"ef011844-7458-4dc5-b4b3-48140b3ba006", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678c89559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89", Pod:"calico-kube-controllers-678c89559-dn457", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie89ac64ba5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.463 [INFO][4988] k8s.go 608: Cleaning up netns ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 
16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.464 [INFO][4988] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" iface="eth0" netns="" Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.464 [INFO][4988] k8s.go 615: Releasing IP address(es) ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.464 [INFO][4988] utils.go 188: Calico CNI releasing IP address ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.496 [INFO][4995] ipam_plugin.go 411: Releasing address using handleID ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.496 [INFO][4995] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.496 [INFO][4995] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.500 [WARNING][4995] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.500 [INFO][4995] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.502 [INFO][4995] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:15.505387 containerd[1293]: 2024-06-25 16:26:15.503 [INFO][4988] k8s.go 621: Teardown processing complete. ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:26:15.505949 containerd[1293]: time="2024-06-25T16:26:15.505889201Z" level=info msg="TearDown network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" successfully" Jun 25 16:26:15.505949 containerd[1293]: time="2024-06-25T16:26:15.505940849Z" level=info msg="StopPodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" returns successfully" Jun 25 16:26:15.506539 containerd[1293]: time="2024-06-25T16:26:15.506480357Z" level=info msg="RemovePodSandbox for \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\"" Jun 25 16:26:15.506592 containerd[1293]: time="2024-06-25T16:26:15.506544317Z" level=info msg="Forcibly stopping sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\"" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.538 [WARNING][5017] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0", GenerateName:"calico-kube-controllers-678c89559-", Namespace:"calico-system", SelfLink:"", UID:"ef011844-7458-4dc5-b4b3-48140b3ba006", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"678c89559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6703fa5987b2b12e3750633b6907182237277b88929297034ce28d260c9d8a89", Pod:"calico-kube-controllers-678c89559-dn457", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie89ac64ba5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.538 [INFO][5017] k8s.go 608: Cleaning up netns ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.538 [INFO][5017] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" iface="eth0" netns="" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.539 [INFO][5017] k8s.go 615: Releasing IP address(es) ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.539 [INFO][5017] utils.go 188: Calico CNI releasing IP address ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.565 [INFO][5025] ipam_plugin.go 411: Releasing address using handleID ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.565 [INFO][5025] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.565 [INFO][5025] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.572 [WARNING][5025] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.572 [INFO][5025] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" HandleID="k8s-pod-network.ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Workload="localhost-k8s-calico--kube--controllers--678c89559--dn457-eth0" Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.574 [INFO][5025] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:15.578718 containerd[1293]: 2024-06-25 16:26:15.576 [INFO][5017] k8s.go 621: Teardown processing complete. ContainerID="ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61" Jun 25 16:26:15.579283 containerd[1293]: time="2024-06-25T16:26:15.578756117Z" level=info msg="TearDown network for sandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" successfully" Jun 25 16:26:15.674859 containerd[1293]: time="2024-06-25T16:26:15.674772905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:26:15.675057 containerd[1293]: time="2024-06-25T16:26:15.674922197Z" level=info msg="RemovePodSandbox \"ce62345f69541c35ee28655e1280bbd66d92c820ee74766dea92a9c08282ba61\" returns successfully" Jun 25 16:26:15.675449 containerd[1293]: time="2024-06-25T16:26:15.675413804Z" level=info msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\"" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.708 [WARNING][5048] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27tds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5419bba1-6081-4f31-bcc8-616bdda728d4", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c", Pod:"coredns-7db6d8ff4d-27tds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8d2492dd75", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.708 [INFO][5048] k8s.go 608: Cleaning up netns ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.708 [INFO][5048] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" iface="eth0" netns="" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.708 [INFO][5048] k8s.go 615: Releasing IP address(es) ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.708 [INFO][5048] utils.go 188: Calico CNI releasing IP address ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.735 [INFO][5055] ipam_plugin.go 411: Releasing address using handleID ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.735 [INFO][5055] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.735 [INFO][5055] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.741 [WARNING][5055] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.741 [INFO][5055] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.743 [INFO][5055] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:15.748578 containerd[1293]: 2024-06-25 16:26:15.744 [INFO][5048] k8s.go 621: Teardown processing complete. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.748578 containerd[1293]: time="2024-06-25T16:26:15.747640301Z" level=info msg="TearDown network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" successfully" Jun 25 16:26:15.748578 containerd[1293]: time="2024-06-25T16:26:15.747678464Z" level=info msg="StopPodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" returns successfully" Jun 25 16:26:15.748578 containerd[1293]: time="2024-06-25T16:26:15.748325144Z" level=info msg="RemovePodSandbox for \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\"" Jun 25 16:26:15.748578 containerd[1293]: time="2024-06-25T16:26:15.748368246Z" level=info msg="Forcibly stopping sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\"" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.779 [WARNING][5079] k8s.go 572: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27tds-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5419bba1-6081-4f31-bcc8-616bdda728d4", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"906807ebc22426d2e26d6170f8cf281cdd6bea04efc2c666e337522883a1249c", Pod:"coredns-7db6d8ff4d-27tds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8d2492dd75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.780 
[INFO][5079] k8s.go 608: Cleaning up netns ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.780 [INFO][5079] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" iface="eth0" netns="" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.780 [INFO][5079] k8s.go 615: Releasing IP address(es) ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.780 [INFO][5079] utils.go 188: Calico CNI releasing IP address ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.801 [INFO][5086] ipam_plugin.go 411: Releasing address using handleID ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.801 [INFO][5086] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.801 [INFO][5086] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.806 [WARNING][5086] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.806 [INFO][5086] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" HandleID="k8s-pod-network.7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Workload="localhost-k8s-coredns--7db6d8ff4d--27tds-eth0" Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.807 [INFO][5086] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:15.810579 containerd[1293]: 2024-06-25 16:26:15.809 [INFO][5079] k8s.go 621: Teardown processing complete. ContainerID="7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d" Jun 25 16:26:15.811109 containerd[1293]: time="2024-06-25T16:26:15.810620168Z" level=info msg="TearDown network for sandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" successfully" Jun 25 16:26:16.033015 containerd[1293]: time="2024-06-25T16:26:16.032896035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:26:16.033159 containerd[1293]: time="2024-06-25T16:26:16.033020491Z" level=info msg="RemovePodSandbox \"7701ccf64ab204f4810e2efb54f851f80a4cb4bd934bef1773a4333466c7cd5d\" returns successfully" Jun 25 16:26:16.033566 containerd[1293]: time="2024-06-25T16:26:16.033542295Z" level=info msg="StopPodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\"" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.137 [WARNING][5108] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bw7kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a", Pod:"csi-node-driver-bw7kv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali13f5e4066e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.138 [INFO][5108] k8s.go 608: Cleaning up netns ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.138 [INFO][5108] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" iface="eth0" netns="" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.138 [INFO][5108] k8s.go 615: Releasing IP address(es) ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.138 [INFO][5108] utils.go 188: Calico CNI releasing IP address ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.158 [INFO][5116] ipam_plugin.go 411: Releasing address using handleID ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.158 [INFO][5116] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.158 [INFO][5116] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.161 [WARNING][5116] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.162 [INFO][5116] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.163 [INFO][5116] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:16.168328 containerd[1293]: 2024-06-25 16:26:16.167 [INFO][5108] k8s.go 621: Teardown processing complete. ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.168936 containerd[1293]: time="2024-06-25T16:26:16.168881172Z" level=info msg="TearDown network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" successfully" Jun 25 16:26:16.168936 containerd[1293]: time="2024-06-25T16:26:16.168918411Z" level=info msg="StopPodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" returns successfully" Jun 25 16:26:16.169427 containerd[1293]: time="2024-06-25T16:26:16.169394189Z" level=info msg="RemovePodSandbox for \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\"" Jun 25 16:26:16.169469 containerd[1293]: time="2024-06-25T16:26:16.169433935Z" level=info msg="Forcibly stopping sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\"" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.234 [WARNING][5139] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bw7kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a82ce7d0-b43c-4d81-ae9f-10974dd66ff7", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7940c9e71a4acd07edbde5957e95ce89cf00c8a04cfee3f15694dc1a6bc7201a", Pod:"csi-node-driver-bw7kv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali13f5e4066e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.234 [INFO][5139] k8s.go 608: Cleaning up netns ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.234 [INFO][5139] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" iface="eth0" netns="" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.234 [INFO][5139] k8s.go 615: Releasing IP address(es) ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.234 [INFO][5139] utils.go 188: Calico CNI releasing IP address ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.270 [INFO][5147] ipam_plugin.go 411: Releasing address using handleID ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.270 [INFO][5147] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.270 [INFO][5147] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.275 [WARNING][5147] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.275 [INFO][5147] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" HandleID="k8s-pod-network.72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Workload="localhost-k8s-csi--node--driver--bw7kv-eth0" Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.276 [INFO][5147] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:26:16.279534 containerd[1293]: 2024-06-25 16:26:16.278 [INFO][5139] k8s.go 621: Teardown processing complete. ContainerID="72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11" Jun 25 16:26:16.280032 containerd[1293]: time="2024-06-25T16:26:16.279561052Z" level=info msg="TearDown network for sandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" successfully" Jun 25 16:26:16.471562 containerd[1293]: time="2024-06-25T16:26:16.471470988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:26:16.472112 containerd[1293]: time="2024-06-25T16:26:16.471574635Z" level=info msg="RemovePodSandbox \"72215987f5c7a16791303c3e92bb6ffe69193ecbace51ca771d32768a5ca1d11\" returns successfully" Jun 25 16:26:16.472275 containerd[1293]: time="2024-06-25T16:26:16.472199984Z" level=info msg="StopPodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\"" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.515 [WARNING][5170] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6b6ad376-38df-49e4-8a9f-dd64acf97dda", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6", Pod:"coredns-7db6d8ff4d-xmnbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6ba286cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.515 [INFO][5170] k8s.go 608: Cleaning up netns 
ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.515 [INFO][5170] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" iface="eth0" netns="" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.515 [INFO][5170] k8s.go 615: Releasing IP address(es) ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.515 [INFO][5170] utils.go 188: Calico CNI releasing IP address ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.538 [INFO][5178] ipam_plugin.go 411: Releasing address using handleID ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.538 [INFO][5178] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.538 [INFO][5178] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.545 [WARNING][5178] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.545 [INFO][5178] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.547 [INFO][5178] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:16.550507 containerd[1293]: 2024-06-25 16:26:16.548 [INFO][5170] k8s.go 621: Teardown processing complete. ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.550507 containerd[1293]: time="2024-06-25T16:26:16.550433083Z" level=info msg="TearDown network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" successfully" Jun 25 16:26:16.550507 containerd[1293]: time="2024-06-25T16:26:16.550474913Z" level=info msg="StopPodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" returns successfully" Jun 25 16:26:16.551274 containerd[1293]: time="2024-06-25T16:26:16.551004291Z" level=info msg="RemovePodSandbox for \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\"" Jun 25 16:26:16.551274 containerd[1293]: time="2024-06-25T16:26:16.551051651Z" level=info msg="Forcibly stopping sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\"" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.616 [WARNING][5201] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6b6ad376-38df-49e4-8a9f-dd64acf97dda", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7078e66d9d9bcc748da59fd60c2444de77b817f79e5bdd05b6fdd8f357268b6", Pod:"coredns-7db6d8ff4d-xmnbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf6ba286cdd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.616 [INFO][5201] k8s.go 608: Cleaning up netns 
ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.616 [INFO][5201] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" iface="eth0" netns="" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.616 [INFO][5201] k8s.go 615: Releasing IP address(es) ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.616 [INFO][5201] utils.go 188: Calico CNI releasing IP address ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.642 [INFO][5208] ipam_plugin.go 411: Releasing address using handleID ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.642 [INFO][5208] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.642 [INFO][5208] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.647 [WARNING][5208] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.647 [INFO][5208] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" HandleID="k8s-pod-network.c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Workload="localhost-k8s-coredns--7db6d8ff4d--xmnbk-eth0" Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.648 [INFO][5208] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:16.652703 containerd[1293]: 2024-06-25 16:26:16.650 [INFO][5201] k8s.go 621: Teardown processing complete. ContainerID="c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89" Jun 25 16:26:16.653258 containerd[1293]: time="2024-06-25T16:26:16.652811549Z" level=info msg="TearDown network for sandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" successfully" Jun 25 16:26:16.838250 containerd[1293]: time="2024-06-25T16:26:16.838174338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:26:16.838464 containerd[1293]: time="2024-06-25T16:26:16.838268356Z" level=info msg="RemovePodSandbox \"c412212884948f6b6b7c43e4dba619e9e0e0c5f63b9bbd5a47b71133fb548e89\" returns successfully" Jun 25 16:26:18.917368 systemd[1]: Started sshd@24-10.0.0.104:22-10.0.0.1:54830.service - OpenSSH per-connection server daemon (10.0.0.1:54830). 
Jun 25 16:26:18.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.104:22-10.0.0.1:54830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:18.942739 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 16:26:18.942928 kernel: audit: type=1130 audit(1719332778.916:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.104:22-10.0.0.1:54830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:18.985000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:18.986405 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:18.987517 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:18.986000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:18.999798 systemd-logind[1278]: New session 25 of user core. 
Jun 25 16:26:19.025286 kernel: audit: type=1101 audit(1719332778.985:763): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.025552 kernel: audit: type=1103 audit(1719332778.986:764): pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.025600 kernel: audit: type=1006 audit(1719332778.986:765): pid=5219 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:26:19.025625 kernel: audit: type=1300 audit(1719332778.986:765): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff04f087a0 a2=3 a3=7f83038e0480 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:19.025690 kernel: audit: type=1327 audit(1719332778.986:765): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:18.986000 audit[5219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff04f087a0 a2=3 a3=7f83038e0480 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:18.986000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:19.025182 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 16:26:19.031000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.031000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.040228 kernel: audit: type=1105 audit(1719332779.031:766): pid=5219 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.040355 kernel: audit: type=1103 audit(1719332779.031:767): pid=5221 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.149782 sshd[5219]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:19.150000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.153328 systemd[1]: sshd@24-10.0.0.104:22-10.0.0.1:54830.service: Deactivated successfully. Jun 25 16:26:19.154316 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:26:19.155572 systemd-logind[1278]: Session 25 logged out. Waiting for processes to exit. 
Jun 25 16:26:19.156506 systemd-logind[1278]: Removed session 25. Jun 25 16:26:19.150000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.170154 kernel: audit: type=1106 audit(1719332779.150:768): pid=5219 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.170297 kernel: audit: type=1104 audit(1719332779.150:769): pid=5219 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:19.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.104:22-10.0.0.1:54830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.734805 kubelet[2308]: E0625 16:26:19.734756 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:26:22.896408 kubelet[2308]: E0625 16:26:22.896379 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:26:24.161020 systemd[1]: Started sshd@25-10.0.0.104:22-10.0.0.1:54832.service - OpenSSH per-connection server daemon (10.0.0.1:54832). 
Jun 25 16:26:24.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.104:22-10.0.0.1:54832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.162082 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:26:24.162145 kernel: audit: type=1130 audit(1719332784.160:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.104:22-10.0.0.1:54832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.201000 audit[5274]: USER_ACCT pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.202942 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 54832 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:24.242104 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:24.202000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.246468 systemd-logind[1278]: New session 26 of user core. 
Jun 25 16:26:24.248595 kernel: audit: type=1101 audit(1719332784.201:772): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.248637 kernel: audit: type=1103 audit(1719332784.202:773): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.248664 kernel: audit: type=1006 audit(1719332784.202:774): pid=5274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:26:24.202000 audit[5274]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda4351f40 a2=3 a3=7fcb32c35480 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:24.254738 kernel: audit: type=1300 audit(1719332784.202:774): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda4351f40 a2=3 a3=7fcb32c35480 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:24.255189 kernel: audit: type=1327 audit(1719332784.202:774): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:24.202000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:24.262197 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 16:26:24.266000 audit[5274]: USER_START pid=5274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.268000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.282257 kernel: audit: type=1105 audit(1719332784.266:775): pid=5274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.282311 kernel: audit: type=1103 audit(1719332784.268:776): pid=5276 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.736986 sshd[5274]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:24.736000 audit[5274]: USER_END pid=5274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.739405 systemd[1]: sshd@25-10.0.0.104:22-10.0.0.1:54832.service: Deactivated successfully. Jun 25 16:26:24.740311 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:26:24.740841 systemd-logind[1278]: Session 26 logged out. Waiting for processes to exit. 
Jun 25 16:26:24.741568 systemd-logind[1278]: Removed session 26. Jun 25 16:26:24.736000 audit[5274]: CRED_DISP pid=5274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.769628 kernel: audit: type=1106 audit(1719332784.736:777): pid=5274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.769724 kernel: audit: type=1104 audit(1719332784.736:778): pid=5274 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:24.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.104:22-10.0.0.1:54832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:27.651000 audit[5292]: NETFILTER_CFG table=filter:112 family=2 entries=9 op=nft_register_rule pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:27.651000 audit[5292]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd228b8c50 a2=0 a3=7ffd228b8c3c items=0 ppid=2514 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:27.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:27.652000 audit[5292]: NETFILTER_CFG table=nat:113 family=2 entries=20 op=nft_register_rule pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:27.652000 audit[5292]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd228b8c50 a2=0 a3=7ffd228b8c3c items=0 ppid=2514 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:27.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:27.676000 audit[5294]: NETFILTER_CFG table=filter:114 family=2 entries=10 op=nft_register_rule pid=5294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:27.676000 audit[5294]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc67759830 a2=0 a3=7ffc6775981c items=0 ppid=2514 pid=5294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:27.676000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:27.683000 audit[5294]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=5294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:27.683000 audit[5294]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc67759830 a2=0 a3=7ffc6775981c items=0 ppid=2514 pid=5294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:27.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:28.137388 kubelet[2308]: I0625 16:26:28.137304 2308 topology_manager.go:215] "Topology Admit Handler" podUID="e53ebd8a-93cd-4ab1-b438-03ade01b802c" podNamespace="calico-apiserver" podName="calico-apiserver-67c5f9d8d8-q8cpw" Jun 25 16:26:28.143879 systemd[1]: Created slice kubepods-besteffort-pode53ebd8a_93cd_4ab1_b438_03ade01b802c.slice - libcontainer container kubepods-besteffort-pode53ebd8a_93cd_4ab1_b438_03ade01b802c.slice. 
Jun 25 16:26:28.260457 kubelet[2308]: I0625 16:26:28.260309 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e53ebd8a-93cd-4ab1-b438-03ade01b802c-calico-apiserver-certs\") pod \"calico-apiserver-67c5f9d8d8-q8cpw\" (UID: \"e53ebd8a-93cd-4ab1-b438-03ade01b802c\") " pod="calico-apiserver/calico-apiserver-67c5f9d8d8-q8cpw" Jun 25 16:26:28.260690 kubelet[2308]: I0625 16:26:28.260466 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dvtp\" (UniqueName: \"kubernetes.io/projected/e53ebd8a-93cd-4ab1-b438-03ade01b802c-kube-api-access-6dvtp\") pod \"calico-apiserver-67c5f9d8d8-q8cpw\" (UID: \"e53ebd8a-93cd-4ab1-b438-03ade01b802c\") " pod="calico-apiserver/calico-apiserver-67c5f9d8d8-q8cpw" Jun 25 16:26:28.367200 kubelet[2308]: E0625 16:26:28.367153 2308 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:26:28.367359 kubelet[2308]: E0625 16:26:28.367230 2308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e53ebd8a-93cd-4ab1-b438-03ade01b802c-calico-apiserver-certs podName:e53ebd8a-93cd-4ab1-b438-03ade01b802c nodeName:}" failed. No retries permitted until 2024-06-25 16:26:28.867210478 +0000 UTC m=+136.211870511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e53ebd8a-93cd-4ab1-b438-03ade01b802c-calico-apiserver-certs") pod "calico-apiserver-67c5f9d8d8-q8cpw" (UID: "e53ebd8a-93cd-4ab1-b438-03ade01b802c") : secret "calico-apiserver-certs" not found Jun 25 16:26:28.966056 kubelet[2308]: E0625 16:26:28.965996 2308 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:26:28.966056 kubelet[2308]: E0625 16:26:28.966075 2308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e53ebd8a-93cd-4ab1-b438-03ade01b802c-calico-apiserver-certs podName:e53ebd8a-93cd-4ab1-b438-03ade01b802c nodeName:}" failed. No retries permitted until 2024-06-25 16:26:29.966058693 +0000 UTC m=+137.310718726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e53ebd8a-93cd-4ab1-b438-03ade01b802c-calico-apiserver-certs") pod "calico-apiserver-67c5f9d8d8-q8cpw" (UID: "e53ebd8a-93cd-4ab1-b438-03ade01b802c") : secret "calico-apiserver-certs" not found Jun 25 16:26:29.533878 systemd[1]: Started sshd@26-10.0.0.104:22-10.0.0.1:53340.service - OpenSSH per-connection server daemon (10.0.0.1:53340). Jun 25 16:26:29.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.104:22-10.0.0.1:53340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:29.535380 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:26:29.535464 kernel: audit: type=1130 audit(1719332789.533:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.104:22-10.0.0.1:53340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:29.569000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.570617 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 53340 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:29.571549 sshd[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:29.574871 systemd-logind[1278]: New session 27 of user core. Jun 25 16:26:29.570000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.585216 kernel: audit: type=1101 audit(1719332789.569:785): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.585266 kernel: audit: type=1103 audit(1719332789.570:786): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.585289 kernel: audit: type=1006 audit(1719332789.570:787): pid=5299 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:26:29.587125 kernel: audit: type=1300 audit(1719332789.570:787): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefd344400 a2=3 a3=7f6d6f201480 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:29.570000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefd344400 a2=3 a3=7f6d6f201480 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:29.570000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:29.591514 kernel: audit: type=1327 audit(1719332789.570:787): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:29.593077 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 16:26:29.597000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.655852 kernel: audit: type=1105 audit(1719332789.597:788): pid=5299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.655974 kernel: audit: type=1103 audit(1719332789.598:789): pid=5301 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.598000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.807679 sshd[5299]: pam_unix(sshd:session): 
session closed for user core Jun 25 16:26:29.808000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.811193 systemd[1]: sshd@26-10.0.0.104:22-10.0.0.1:53340.service: Deactivated successfully. Jun 25 16:26:29.812035 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:26:29.808000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.815488 systemd-logind[1278]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:26:29.816315 systemd-logind[1278]: Removed session 27. Jun 25 16:26:29.816551 kernel: audit: type=1106 audit(1719332789.808:790): pid=5299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.816610 kernel: audit: type=1104 audit(1719332789.808:791): pid=5299 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:29.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.104:22-10.0.0.1:53340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:30.247223 containerd[1293]: time="2024-06-25T16:26:30.247175445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c5f9d8d8-q8cpw,Uid:e53ebd8a-93cd-4ab1-b438-03ade01b802c,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:26:31.027148 systemd-networkd[1116]: cali6cf6c1f3057: Link UP Jun 25 16:26:31.163284 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:26:31.163459 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6cf6c1f3057: link becomes ready Jun 25 16:26:31.163771 systemd-networkd[1116]: cali6cf6c1f3057: Gained carrier Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.901 [INFO][5315] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0 calico-apiserver-67c5f9d8d8- calico-apiserver e53ebd8a-93cd-4ab1-b438-03ade01b802c 1192 0 2024-06-25 16:26:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c5f9d8d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67c5f9d8d8-q8cpw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6cf6c1f3057 [] []}} ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.901 [INFO][5315] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 
16:26:30.931 [INFO][5328] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" HandleID="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Workload="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.938 [INFO][5328] ipam_plugin.go 264: Auto assigning IP ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" HandleID="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Workload="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67c5f9d8d8-q8cpw", "timestamp":"2024-06-25 16:26:30.931614784 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.938 [INFO][5328] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.938 [INFO][5328] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.938 [INFO][5328] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.940 [INFO][5328] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.943 [INFO][5328] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.948 [INFO][5328] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.949 [INFO][5328] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.952 [INFO][5328] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.952 [INFO][5328] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.953 [INFO][5328] ipam.go 1685: Creating new handle: k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30 Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:30.956 [INFO][5328] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:31.007 [INFO][5328] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" 
host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:31.007 [INFO][5328] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" host="localhost" Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:31.007 [INFO][5328] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:26:31.196031 containerd[1293]: 2024-06-25 16:26:31.007 [INFO][5328] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" HandleID="k8s-pod-network.5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Workload="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" Jun 25 16:26:31.196984 containerd[1293]: 2024-06-25 16:26:31.024 [INFO][5315] k8s.go 386: Populated endpoint ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0", GenerateName:"calico-apiserver-67c5f9d8d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e53ebd8a-93cd-4ab1-b438-03ade01b802c", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c5f9d8d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67c5f9d8d8-q8cpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf6c1f3057", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:31.196984 containerd[1293]: 2024-06-25 16:26:31.024 [INFO][5315] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" Jun 25 16:26:31.196984 containerd[1293]: 2024-06-25 16:26:31.024 [INFO][5315] dataplane_linux.go 68: Setting the host side veth name to cali6cf6c1f3057 ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" Jun 25 16:26:31.196984 containerd[1293]: 2024-06-25 16:26:31.164 [INFO][5315] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" Jun 25 16:26:31.196984 containerd[1293]: 2024-06-25 16:26:31.164 [INFO][5315] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0", GenerateName:"calico-apiserver-67c5f9d8d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e53ebd8a-93cd-4ab1-b438-03ade01b802c", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c5f9d8d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30", Pod:"calico-apiserver-67c5f9d8d8-q8cpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6cf6c1f3057", MAC:"c2:3c:de:66:e0:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:26:31.196984 containerd[1293]: 2024-06-25 16:26:31.193 [INFO][5315] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30" Namespace="calico-apiserver" Pod="calico-apiserver-67c5f9d8d8-q8cpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c5f9d8d8--q8cpw-eth0" Jun 25 16:26:31.211000 audit[5352]: 
NETFILTER_CFG table=filter:116 family=2 entries=55 op=nft_register_chain pid=5352 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:26:31.211000 audit[5352]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7fff1a8e5f40 a2=0 a3=7fff1a8e5f2c items=0 ppid=3935 pid=5352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:31.211000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:26:31.221741 containerd[1293]: time="2024-06-25T16:26:31.221641581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:26:31.221741 containerd[1293]: time="2024-06-25T16:26:31.221695953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:31.221985 containerd[1293]: time="2024-06-25T16:26:31.221719819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:26:31.221985 containerd[1293]: time="2024-06-25T16:26:31.221734055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:26:31.246733 systemd[1]: run-containerd-runc-k8s.io-5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30-runc.liGT09.mount: Deactivated successfully. Jun 25 16:26:31.253068 systemd[1]: Started cri-containerd-5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30.scope - libcontainer container 5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30. 
Jun 25 16:26:31.261000 audit: BPF prog-id=180 op=LOAD Jun 25 16:26:31.262000 audit: BPF prog-id=181 op=LOAD Jun 25 16:26:31.262000 audit[5371]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001ab988 a2=78 a3=0 items=0 ppid=5362 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:31.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303462653064343138323064613036343766616331386331303438 Jun 25 16:26:31.262000 audit: BPF prog-id=182 op=LOAD Jun 25 16:26:31.262000 audit[5371]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001ab720 a2=78 a3=0 items=0 ppid=5362 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:31.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303462653064343138323064613036343766616331386331303438 Jun 25 16:26:31.262000 audit: BPF prog-id=182 op=UNLOAD Jun 25 16:26:31.262000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:26:31.262000 audit: BPF prog-id=183 op=LOAD Jun 25 16:26:31.262000 audit[5371]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001abbe0 a2=78 a3=0 items=0 ppid=5362 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:31.262000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303462653064343138323064613036343766616331386331303438 Jun 25 16:26:31.264046 systemd-resolved[1234]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:26:31.292031 containerd[1293]: time="2024-06-25T16:26:31.291893862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c5f9d8d8-q8cpw,Uid:e53ebd8a-93cd-4ab1-b438-03ade01b802c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30\"" Jun 25 16:26:31.294083 containerd[1293]: time="2024-06-25T16:26:31.294042762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:26:32.763405 systemd-networkd[1116]: cali6cf6c1f3057: Gained IPv6LL Jun 25 16:26:34.829899 systemd[1]: Started sshd@27-10.0.0.104:22-10.0.0.1:53354.service - OpenSSH per-connection server daemon (10.0.0.1:53354). Jun 25 16:26:34.835892 kernel: kauditd_printk_skb: 16 callbacks suppressed Jun 25 16:26:34.836104 kernel: audit: type=1130 audit(1719332794.829:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.104:22-10.0.0.1:53354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:34.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.104:22-10.0.0.1:53354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:34.880000 audit[5421]: USER_ACCT pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:34.883051 sshd[5421]: Accepted publickey for core from 10.0.0.1 port 53354 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:34.883194 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:34.888885 kernel: audit: type=1101 audit(1719332794.880:801): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:34.889021 kernel: audit: type=1103 audit(1719332794.881:802): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:34.889056 kernel: audit: type=1006 audit(1719332794.881:803): pid=5421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 16:26:34.881000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:34.890960 kernel: audit: type=1300 audit(1719332794.881:803): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5f2098d0 a2=3 a3=7f5d5b283480 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.881000 audit[5421]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5f2098d0 a2=3 a3=7f5d5b283480 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:34.890520 systemd-logind[1278]: New session 28 of user core. Jun 25 16:26:34.907203 kernel: audit: type=1327 audit(1719332794.881:803): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:34.881000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:34.907114 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 16:26:34.912000 audit[5421]: USER_START pid=5421 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:34.914000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:34.922971 kernel: audit: type=1105 audit(1719332794.912:804): pid=5421 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:34.923039 kernel: audit: type=1103 audit(1719332794.914:805): pid=5423 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.077912 
sshd[5421]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:35.078000 audit[5421]: USER_END pid=5421 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.078000 audit[5421]: CRED_DISP pid=5421 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.087270 kernel: audit: type=1106 audit(1719332795.078:806): pid=5421 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.087371 kernel: audit: type=1104 audit(1719332795.078:807): pid=5421 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.092470 systemd[1]: sshd@27-10.0.0.104:22-10.0.0.1:53354.service: Deactivated successfully. Jun 25 16:26:35.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.104:22-10.0.0.1:53354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:35.093059 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 16:26:35.093618 systemd-logind[1278]: Session 28 logged out. Waiting for processes to exit. 
Jun 25 16:26:35.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.104:22-10.0.0.1:53362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:35.100231 systemd[1]: Started sshd@28-10.0.0.104:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). Jun 25 16:26:35.101031 systemd-logind[1278]: Removed session 28. Jun 25 16:26:35.131000 audit[5436]: USER_ACCT pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.132377 sshd[5436]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:35.132000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.132000 audit[5436]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd96494570 a2=3 a3=7f7b62355480 items=0 ppid=1 pid=5436 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:35.132000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:35.133502 sshd[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:35.137568 systemd-logind[1278]: New session 29 of user core. Jun 25 16:26:35.146114 systemd[1]: Started session-29.scope - Session 29 of User core. 
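The audit PROCTITLE fields in the records above (e.g. proctitle=737368643A20636F7265205B707269765D) are the process command line, hex-encoded with NUL bytes separating argv elements. A minimal decoder, written here purely for illustration (the helper name is ours, not part of auditd or any log tooling):

```python
# Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated argv.
# Illustrative helper; not part of auditd or any log tooling.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return raw.decode("utf-8", errors="replace").split("\x00")

# The short sshd proctitle seen repeatedly in this log:
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# → ['sshd: core [priv]']
```

The longer runc proctitle values in the records above decode the same way, yielding the runc --root /run/containerd/runc/k8s.io --log ... invocation for the container task.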
Jun 25 16:26:35.149000 audit[5436]: USER_START pid=5436 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.150000 audit[5438]: CRED_ACQ pid=5438 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.548694 containerd[1293]: time="2024-06-25T16:26:35.548533744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.550682 containerd[1293]: time="2024-06-25T16:26:35.550610197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:26:35.556752 containerd[1293]: time="2024-06-25T16:26:35.556636473Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.569995 containerd[1293]: time="2024-06-25T16:26:35.569902002Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.573616 containerd[1293]: time="2024-06-25T16:26:35.573540960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:35.574722 containerd[1293]: time="2024-06-25T16:26:35.574661992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id 
\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.28056549s" Jun 25 16:26:35.574722 containerd[1293]: time="2024-06-25T16:26:35.574713379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:26:35.581090 containerd[1293]: time="2024-06-25T16:26:35.581037256Z" level=info msg="CreateContainer within sandbox \"5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:26:35.642061 containerd[1293]: time="2024-06-25T16:26:35.641992786Z" level=info msg="CreateContainer within sandbox \"5204be0d41820da0647fac18c104827bf0997d11e2b6b81284123f4a52958c30\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"640e0db58460e4e5f7b62d9f817e454d651267108bb78a2e689de079b0834c3f\"" Jun 25 16:26:35.643036 containerd[1293]: time="2024-06-25T16:26:35.642981690Z" level=info msg="StartContainer for \"640e0db58460e4e5f7b62d9f817e454d651267108bb78a2e689de079b0834c3f\"" Jun 25 16:26:35.690182 systemd[1]: Started cri-containerd-640e0db58460e4e5f7b62d9f817e454d651267108bb78a2e689de079b0834c3f.scope - libcontainer container 640e0db58460e4e5f7b62d9f817e454d651267108bb78a2e689de079b0834c3f. 
Jun 25 16:26:35.709000 audit: BPF prog-id=184 op=LOAD Jun 25 16:26:35.710000 audit: BPF prog-id=185 op=LOAD Jun 25 16:26:35.710000 audit[5463]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=5362 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:35.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634306530646235383436306534653566376236326439663831376534 Jun 25 16:26:35.710000 audit: BPF prog-id=186 op=LOAD Jun 25 16:26:35.710000 audit[5463]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=5362 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:35.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634306530646235383436306534653566376236326439663831376534 Jun 25 16:26:35.710000 audit: BPF prog-id=186 op=UNLOAD Jun 25 16:26:35.710000 audit: BPF prog-id=185 op=UNLOAD Jun 25 16:26:35.710000 audit: BPF prog-id=187 op=LOAD Jun 25 16:26:35.710000 audit[5463]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=5362 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:35.710000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634306530646235383436306534653566376236326439663831376534 Jun 25 16:26:35.802804 containerd[1293]: time="2024-06-25T16:26:35.802647818Z" level=info msg="StartContainer for \"640e0db58460e4e5f7b62d9f817e454d651267108bb78a2e689de079b0834c3f\" returns successfully" Jun 25 16:26:35.821998 sshd[5436]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:35.822000 audit[5436]: USER_END pid=5436 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.822000 audit[5436]: CRED_DISP pid=5436 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.830379 systemd[1]: sshd@28-10.0.0.104:22-10.0.0.1:53362.service: Deactivated successfully. Jun 25 16:26:35.830944 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 16:26:35.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.104:22-10.0.0.1:53362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:35.831680 systemd-logind[1278]: Session 29 logged out. Waiting for processes to exit. Jun 25 16:26:35.838172 systemd[1]: Started sshd@29-10.0.0.104:22-10.0.0.1:53376.service - OpenSSH per-connection server daemon (10.0.0.1:53376). 
Jun 25 16:26:35.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.104:22-10.0.0.1:53376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:35.839440 systemd-logind[1278]: Removed session 29. Jun 25 16:26:35.872000 audit[5497]: USER_ACCT pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.873215 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 53376 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:35.873000 audit[5497]: CRED_ACQ pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.873000 audit[5497]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd8479420 a2=3 a3=7f590df96480 items=0 ppid=1 pid=5497 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:35.873000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:35.874529 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:35.879486 systemd-logind[1278]: New session 30 of user core. Jun 25 16:26:35.885981 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jun 25 16:26:35.889000 audit[5497]: USER_START pid=5497 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:35.891000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:36.264616 kubelet[2308]: I0625 16:26:36.264545 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67c5f9d8d8-q8cpw" podStartSLOduration=4.979429266 podStartE2EDuration="9.264525965s" podCreationTimestamp="2024-06-25 16:26:27 +0000 UTC" firstStartedPulling="2024-06-25 16:26:31.293436218 +0000 UTC m=+138.638096251" lastFinishedPulling="2024-06-25 16:26:35.578532907 +0000 UTC m=+142.923192950" observedRunningTime="2024-06-25 16:26:36.263344459 +0000 UTC m=+143.608004512" watchObservedRunningTime="2024-06-25 16:26:36.264525965 +0000 UTC m=+143.609185988" Jun 25 16:26:36.292000 audit[5510]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=5510 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:36.292000 audit[5510]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe387a8f70 a2=0 a3=7ffe387a8f5c items=0 ppid=2514 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:36.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:36.292000 audit[5510]: NETFILTER_CFG table=nat:118 family=2 entries=20 
op=nft_register_rule pid=5510 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:36.292000 audit[5510]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe387a8f70 a2=0 a3=7ffe387a8f5c items=0 ppid=2514 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:36.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:37.682000 audit[5515]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:37.682000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffff0b119e0 a2=0 a3=7ffff0b119cc items=0 ppid=2514 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:37.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:37.683000 audit[5515]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=5515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:37.683000 audit[5515]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffff0b119e0 a2=0 a3=7ffff0b119cc items=0 ppid=2514 pid=5515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:37.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 
16:26:38.949000 audit[5517]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:38.949000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffd4da588f0 a2=0 a3=7ffd4da588dc items=0 ppid=2514 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:38.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:38.950000 audit[5517]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:38.950000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd4da588f0 a2=0 a3=0 items=0 ppid=2514 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:38.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:39.039000 audit[5519]: NETFILTER_CFG table=filter:123 family=2 entries=32 op=nft_register_rule pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:39.039000 audit[5519]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc714bcc70 a2=0 a3=7ffc714bcc5c items=0 ppid=2514 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:39.039000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:39.040000 audit[5519]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:39.040000 audit[5519]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc714bcc70 a2=0 a3=0 items=0 ppid=2514 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:39.040000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:40.065131 sshd[5497]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:40.066000 audit[5497]: USER_END pid=5497 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.108280 kernel: kauditd_printk_skb: 56 callbacks suppressed Jun 25 16:26:40.108557 kernel: audit: type=1106 audit(1719332800.066:838): pid=5497 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.066000 audit[5497]: CRED_DISP pid=5497 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.116413 kernel: audit: type=1104 audit(1719332800.066:839): pid=5497 uid=0 
auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.120439 systemd[1]: sshd@29-10.0.0.104:22-10.0.0.1:53376.service: Deactivated successfully. Jun 25 16:26:40.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.104:22-10.0.0.1:53376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:40.121324 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 16:26:40.122574 systemd-logind[1278]: Session 30 logged out. Waiting for processes to exit. Jun 25 16:26:40.124371 systemd[1]: Started sshd@30-10.0.0.104:22-10.0.0.1:33340.service - OpenSSH per-connection server daemon (10.0.0.1:33340). Jun 25 16:26:40.124844 kernel: audit: type=1131 audit(1719332800.119:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.104:22-10.0.0.1:53376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:40.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.104:22-10.0.0.1:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:40.125474 systemd-logind[1278]: Removed session 30. Jun 25 16:26:40.128871 kernel: audit: type=1130 audit(1719332800.123:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.104:22-10.0.0.1:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:40.162000 audit[5522]: USER_ACCT pid=5522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.163223 sshd[5522]: Accepted publickey for core from 10.0.0.1 port 33340 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:40.164303 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:40.163000 audit[5522]: CRED_ACQ pid=5522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.197948 systemd-logind[1278]: New session 31 of user core. Jun 25 16:26:40.200074 kernel: audit: type=1101 audit(1719332800.162:842): pid=5522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.200123 kernel: audit: type=1103 audit(1719332800.163:843): pid=5522 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.200143 kernel: audit: type=1006 audit(1719332800.163:844): pid=5522 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jun 25 16:26:40.163000 audit[5522]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1a542270 a2=3 a3=7f696b8ff480 items=0 ppid=1 pid=5522 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:40.206353 kernel: audit: type=1300 audit(1719332800.163:844): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1a542270 a2=3 a3=7f696b8ff480 items=0 ppid=1 pid=5522 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:40.206386 kernel: audit: type=1327 audit(1719332800.163:844): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:40.163000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:40.212044 systemd[1]: Started session-31.scope - Session 31 of User core. Jun 25 16:26:40.215000 audit[5522]: USER_START pid=5522 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.216000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:40.220847 kernel: audit: type=1105 audit(1719332800.215:845): pid=5522 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.146000 audit[5532]: NETFILTER_CFG table=filter:125 family=2 entries=32 op=nft_register_rule pid=5532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:41.146000 audit[5532]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe2e6245f0 a2=0 a3=7ffe2e6245dc items=0 ppid=2514 pid=5532 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:41.147000 audit[5532]: NETFILTER_CFG table=nat:126 family=2 entries=30 op=nft_register_rule pid=5532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:41.147000 audit[5532]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe2e6245f0 a2=0 a3=0 items=0 ppid=2514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:41.401214 sshd[5522]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:41.401000 audit[5522]: USER_END pid=5522 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.401000 audit[5522]: CRED_DISP pid=5522 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.410548 systemd[1]: sshd@30-10.0.0.104:22-10.0.0.1:33340.service: Deactivated successfully. 
Jun 25 16:26:41.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.104:22-10.0.0.1:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:41.411297 systemd[1]: session-31.scope: Deactivated successfully. Jun 25 16:26:41.411895 systemd-logind[1278]: Session 31 logged out. Waiting for processes to exit. Jun 25 16:26:41.413323 systemd[1]: Started sshd@31-10.0.0.104:22-10.0.0.1:33348.service - OpenSSH per-connection server daemon (10.0.0.1:33348). Jun 25 16:26:41.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.104:22-10.0.0.1:33348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:41.414318 systemd-logind[1278]: Removed session 31. Jun 25 16:26:41.447000 audit[5537]: USER_ACCT pid=5537 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.448112 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 33348 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:41.448000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.448000 audit[5537]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd23dce30 a2=3 a3=7f92c4ae6480 items=0 ppid=1 pid=5537 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.448000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:41.449405 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:41.453034 systemd-logind[1278]: New session 32 of user core. Jun 25 16:26:41.462041 systemd[1]: Started session-32.scope - Session 32 of User core. Jun 25 16:26:41.466000 audit[5537]: USER_START pid=5537 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.467000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.590086 sshd[5537]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:41.590000 audit[5537]: USER_END pid=5537 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.590000 audit[5537]: CRED_DISP pid=5537 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:41.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.104:22-10.0.0.1:33348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:41.592386 systemd[1]: sshd@31-10.0.0.104:22-10.0.0.1:33348.service: Deactivated successfully. 
Jun 25 16:26:41.593319 systemd[1]: session-32.scope: Deactivated successfully. Jun 25 16:26:41.594677 systemd-logind[1278]: Session 32 logged out. Waiting for processes to exit. Jun 25 16:26:41.595506 systemd-logind[1278]: Removed session 32. Jun 25 16:26:41.651000 audit[5550]: NETFILTER_CFG table=filter:127 family=2 entries=32 op=nft_register_rule pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:41.651000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc85884850 a2=0 a3=7ffc8588483c items=0 ppid=2514 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:41.652000 audit[5550]: NETFILTER_CFG table=nat:128 family=2 entries=34 op=nft_register_chain pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:41.652000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc85884850 a2=0 a3=7ffc8588483c items=0 ppid=2514 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:41.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:46.602649 systemd[1]: Started sshd@32-10.0.0.104:22-10.0.0.1:34456.service - OpenSSH per-connection server daemon (10.0.0.1:34456). Jun 25 16:26:46.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.104:22-10.0.0.1:34456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:26:46.603459 kernel: kauditd_printk_skb: 27 callbacks suppressed Jun 25 16:26:46.603498 kernel: audit: type=1130 audit(1719332806.601:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.104:22-10.0.0.1:34456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:46.634000 audit[5558]: USER_ACCT pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:46.635235 sshd[5558]: Accepted publickey for core from 10.0.0.1 port 34456 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:47.991319 kernel: audit: type=1101 audit(1719332806.634:864): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:47.991474 kernel: audit: type=1103 audit(1719332806.711:865): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:47.991505 kernel: audit: type=1006 audit(1719332806.711:866): pid=5558 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jun 25 16:26:47.991523 kernel: audit: type=1300 audit(1719332806.711:866): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe5167340 a2=3 a3=7fc444c13480 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:26:47.991541 kernel: audit: type=1327 audit(1719332806.711:866): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:47.991559 kernel: audit: type=1105 audit(1719332806.808:867): pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:47.991578 kernel: audit: type=1103 audit(1719332806.809:868): pid=5560 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:46.711000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:46.711000 audit[5558]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe5167340 a2=3 a3=7fc444c13480 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:46.711000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:46.808000 audit[5558]: USER_START pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:46.809000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:46.713138 sshd[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:46.719503 systemd-logind[1278]: New session 33 of user core. Jun 25 16:26:46.804143 systemd[1]: Started session-33.scope - Session 33 of User core. Jun 25 16:26:48.096240 sshd[5558]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:48.096000 audit[5558]: USER_END pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:48.099125 systemd[1]: sshd@32-10.0.0.104:22-10.0.0.1:34456.service: Deactivated successfully. Jun 25 16:26:48.099798 systemd[1]: session-33.scope: Deactivated successfully. Jun 25 16:26:48.100896 systemd-logind[1278]: Session 33 logged out. Waiting for processes to exit. Jun 25 16:26:48.101706 systemd-logind[1278]: Removed session 33. 
Jun 25 16:26:48.118917 kernel: audit: type=1106 audit(1719332808.096:869): pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:48.119059 kernel: audit: type=1104 audit(1719332808.096:870): pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:48.096000 audit[5558]: CRED_DISP pid=5558 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:48.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.104:22-10.0.0.1:34456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:48.734723 kubelet[2308]: E0625 16:26:48.734687 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:26:53.110148 systemd[1]: Started sshd@33-10.0.0.104:22-10.0.0.1:34470.service - OpenSSH per-connection server daemon (10.0.0.1:34470). Jun 25 16:26:53.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.104:22-10.0.0.1:34470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:53.137115 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:26:53.137176 kernel: audit: type=1130 audit(1719332813.109:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.104:22-10.0.0.1:34470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:53.165000 audit[5602]: USER_ACCT pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.167059 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 34470 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:53.168152 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:53.166000 audit[5602]: CRED_ACQ pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.173841 kernel: audit: type=1101 audit(1719332813.165:873): pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.173906 kernel: audit: type=1103 audit(1719332813.166:874): pid=5602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.176139 kernel: audit: type=1006 audit(1719332813.167:875): pid=5602 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=34 res=1 Jun 25 16:26:53.167000 audit[5602]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed9d26640 a2=3 a3=7f3dd0e14480 items=0 ppid=1 pid=5602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:53.180054 kernel: audit: type=1300 audit(1719332813.167:875): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed9d26640 a2=3 a3=7f3dd0e14480 items=0 ppid=1 pid=5602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:53.180088 kernel: audit: type=1327 audit(1719332813.167:875): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:53.167000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:53.176514 systemd-logind[1278]: New session 34 of user core. Jun 25 16:26:53.184057 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jun 25 16:26:53.189000 audit[5602]: USER_START pid=5602 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.198557 kernel: audit: type=1105 audit(1719332813.189:876): pid=5602 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.198713 kernel: audit: type=1103 audit(1719332813.191:877): pid=5604 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.191000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.309612 sshd[5602]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:53.309000 audit[5602]: USER_END pid=5602 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.312889 systemd[1]: sshd@33-10.0.0.104:22-10.0.0.1:34470.service: Deactivated successfully. Jun 25 16:26:53.313625 systemd[1]: session-34.scope: Deactivated successfully. Jun 25 16:26:53.314349 systemd-logind[1278]: Session 34 logged out. Waiting for processes to exit. 
Jun 25 16:26:53.315165 systemd-logind[1278]: Removed session 34. Jun 25 16:26:53.310000 audit[5602]: CRED_DISP pid=5602 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.370491 kernel: audit: type=1106 audit(1719332813.309:878): pid=5602 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.370689 kernel: audit: type=1104 audit(1719332813.310:879): pid=5602 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:53.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.104:22-10.0.0.1:34470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:55.307000 audit[5615]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:55.307000 audit[5615]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffec739dc90 a2=0 a3=7ffec739dc7c items=0 ppid=2514 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:55.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:55.309000 audit[5615]: NETFILTER_CFG table=nat:130 family=2 entries=106 op=nft_register_chain pid=5615 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:55.309000 audit[5615]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffec739dc90 a2=0 a3=7ffec739dc7c items=0 ppid=2514 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:55.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:56.734939 kubelet[2308]: E0625 16:26:56.734878 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:26:57.734192 kubelet[2308]: E0625 16:26:57.734147 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:26:58.321506 systemd[1]: Started sshd@34-10.0.0.104:22-10.0.0.1:52390.service - OpenSSH per-connection server daemon 
(10.0.0.1:52390). Jun 25 16:26:58.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.104:22-10.0.0.1:52390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:58.322413 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:26:58.322458 kernel: audit: type=1130 audit(1719332818.320:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.104:22-10.0.0.1:52390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:58.356000 audit[5623]: USER_ACCT pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.358061 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 52390 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:26:58.359254 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:58.363081 systemd-logind[1278]: New session 35 of user core. 
Jun 25 16:26:58.357000 audit[5623]: CRED_ACQ pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.376428 kernel: audit: type=1101 audit(1719332818.356:884): pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.376591 kernel: audit: type=1103 audit(1719332818.357:885): pid=5623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.376622 kernel: audit: type=1006 audit(1719332818.358:886): pid=5623 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jun 25 16:26:58.358000 audit[5623]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7f8b65d0 a2=3 a3=7fe34e314480 items=0 ppid=1 pid=5623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.382429 kernel: audit: type=1300 audit(1719332818.358:886): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7f8b65d0 a2=3 a3=7fe34e314480 items=0 ppid=1 pid=5623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:58.382478 kernel: audit: type=1327 audit(1719332818.358:886): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:58.358000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:58.400195 
systemd[1]: Started session-35.scope - Session 35 of User core. Jun 25 16:26:58.403000 audit[5623]: USER_START pid=5623 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.405000 audit[5625]: CRED_ACQ pid=5625 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.413976 kernel: audit: type=1105 audit(1719332818.403:887): pid=5623 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.414044 kernel: audit: type=1103 audit(1719332818.405:888): pid=5625 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.511631 sshd[5623]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:58.512000 audit[5623]: USER_END pid=5623 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.514676 systemd[1]: sshd@34-10.0.0.104:22-10.0.0.1:52390.service: Deactivated successfully. Jun 25 16:26:58.515619 systemd[1]: session-35.scope: Deactivated successfully. Jun 25 16:26:58.516217 systemd-logind[1278]: Session 35 logged out. 
Waiting for processes to exit. Jun 25 16:26:58.517103 systemd-logind[1278]: Removed session 35. Jun 25 16:26:58.517850 kernel: audit: type=1106 audit(1719332818.512:889): pid=5623 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.518157 kernel: audit: type=1104 audit(1719332818.512:890): pid=5623 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.512000 audit[5623]: CRED_DISP pid=5623 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:26:58.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.104:22-10.0.0.1:52390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:01.737205 kubelet[2308]: E0625 16:27:01.734559 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:27:02.352011 systemd[1]: run-containerd-runc-k8s.io-311b79d3e7948322cdd574241099f61faf4238388b8e6a7e55da0a671a572758-runc.PwXeDg.mount: Deactivated successfully. Jun 25 16:27:03.526795 systemd[1]: Started sshd@35-10.0.0.104:22-10.0.0.1:52404.service - OpenSSH per-connection server daemon (10.0.0.1:52404). 
Jun 25 16:27:03.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.104:22-10.0.0.1:52404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:03.530195 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:27:03.530383 kernel: audit: type=1130 audit(1719332823.526:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.104:22-10.0.0.1:52404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:03.564000 audit[5659]: USER_ACCT pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.566098 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 52404 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA Jun 25 16:27:03.567009 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:03.565000 audit[5659]: CRED_ACQ pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.573967 systemd-logind[1278]: New session 36 of user core. 
Jun 25 16:27:03.591609 kernel: audit: type=1101 audit(1719332823.564:893): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.591654 kernel: audit: type=1103 audit(1719332823.565:894): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.591682 kernel: audit: type=1006 audit(1719332823.565:895): pid=5659 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jun 25 16:27:03.591704 kernel: audit: type=1300 audit(1719332823.565:895): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff51a13ec0 a2=3 a3=7f9d05565480 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:03.591730 kernel: audit: type=1327 audit(1719332823.565:895): proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:03.565000 audit[5659]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff51a13ec0 a2=3 a3=7f9d05565480 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:03.565000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:03.590614 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jun 25 16:27:03.604000 audit[5659]: USER_START pid=5659 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.606000 audit[5661]: CRED_ACQ pid=5661 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.688204 kernel: audit: type=1105 audit(1719332823.604:896): pid=5659 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.688399 kernel: audit: type=1103 audit(1719332823.606:897): pid=5661 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.784351 sshd[5659]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:03.784000 audit[5659]: USER_END pid=5659 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:27:03.786534 systemd[1]: sshd@35-10.0.0.104:22-10.0.0.1:52404.service: Deactivated successfully. Jun 25 16:27:03.787296 systemd[1]: session-36.scope: Deactivated successfully. Jun 25 16:27:03.787759 systemd-logind[1278]: Session 36 logged out. Waiting for processes to exit. 
Jun 25 16:27:03.788473 systemd-logind[1278]: Removed session 36.
Jun 25 16:27:03.784000 audit[5659]: CRED_DISP pid=5659 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:03.870374 kernel: audit: type=1106 audit(1719332823.784:898): pid=5659 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:03.870553 kernel: audit: type=1104 audit(1719332823.784:899): pid=5659 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:03.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.104:22-10.0.0.1:52404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:27:05.734557 kubelet[2308]: E0625 16:27:05.734427 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 25 16:27:08.803357 systemd[1]: Started sshd@36-10.0.0.104:22-10.0.0.1:57480.service - OpenSSH per-connection server daemon (10.0.0.1:57480).
Jun 25 16:27:08.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.104:22-10.0.0.1:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:27:08.805356 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jun 25 16:27:08.805416 kernel: audit: type=1130 audit(1719332828.802:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.104:22-10.0.0.1:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:27:08.835000 audit[5679]: USER_ACCT pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.837204 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 57480 ssh2: RSA SHA256:3rMeAYqoNn4w3L8cNk/mI+UYoLsMELR6uQpfNNrLbJA
Jun 25 16:27:08.838841 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:27:08.837000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.843387 systemd-logind[1278]: New session 37 of user core.
Jun 25 16:27:08.860836 kernel: audit: type=1101 audit(1719332828.835:902): pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.860983 kernel: audit: type=1103 audit(1719332828.837:903): pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.861018 kernel: audit: type=1006 audit(1719332828.837:904): pid=5679 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1
Jun 25 16:27:08.837000 audit[5679]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffed705690 a2=3 a3=7fcc1ead0480 items=0 ppid=1 pid=5679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:27:08.867044 kernel: audit: type=1300 audit(1719332828.837:904): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffed705690 a2=3 a3=7fcc1ead0480 items=0 ppid=1 pid=5679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:27:08.867217 kernel: audit: type=1327 audit(1719332828.837:904): proctitle=737368643A20636F7265205B707269765D
Jun 25 16:27:08.837000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:27:08.870212 systemd[1]: Started session-37.scope - Session 37 of User core.
Jun 25 16:27:08.876000 audit[5679]: USER_START pid=5679 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.878000 audit[5690]: CRED_ACQ pid=5690 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.886723 kernel: audit: type=1105 audit(1719332828.876:905): pid=5679 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.886913 kernel: audit: type=1103 audit(1719332828.878:906): pid=5690 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.977479 sshd[5679]: pam_unix(sshd:session): session closed for user core
Jun 25 16:27:08.978000 audit[5679]: USER_END pid=5679 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.980532 systemd[1]: sshd@36-10.0.0.104:22-10.0.0.1:57480.service: Deactivated successfully.
Jun 25 16:27:08.981348 systemd[1]: session-37.scope: Deactivated successfully.
Jun 25 16:27:08.982004 systemd-logind[1278]: Session 37 logged out. Waiting for processes to exit.
Jun 25 16:27:08.982788 systemd-logind[1278]: Removed session 37.
Jun 25 16:27:08.978000 audit[5679]: CRED_DISP pid=5679 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.994532 kernel: audit: type=1106 audit(1719332828.978:907): pid=5679 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.994600 kernel: audit: type=1104 audit(1719332828.978:908): pid=5679 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:27:08.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.104:22-10.0.0.1:57480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:27:09.644000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.644000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d29de0 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:27:09.644000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:27:09.644000 audit[2196]: AVC avc: denied { watch } for pid=2196 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c578,c601 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.644000 audit[2196]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001d74e40 a2=fc6 a3=0 items=0 ppid=2027 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c578,c601 key=(null)
Jun 25 16:27:09.644000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269
Jun 25 16:27:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c011f40030 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null)
Jun 25 16:27:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=64 a1=c0107962c0 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null)
Jun 25 16:27:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572
Jun 25 16:27:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572
Jun 25 16:27:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c011b5f380 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null)
Jun 25 16:27:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572
Jun 25 16:27:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=67 a1=c011b5f410 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null)
Jun 25 16:27:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572
Jun 25 16:27:09.746000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.746000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c010666c60 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null)
Jun 25 16:27:09.746000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572
Jun 25 16:27:09.747000 audit[2189]: AVC avc: denied { watch } for pid=2189 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c140,c478 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0
Jun 25 16:27:09.747000 audit[2189]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=65 a1=c011a06180 a2=fc6 a3=0 items=0 ppid=2028 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c140,c478 key=(null)
Jun 25 16:27:09.747000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313034002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572